InferX — Serverless GPU Inference Platform for Production Workloads

Funcpod

Tenant:    public
Namespace: ActionAnalytics
Podname:   public/ActionAnalytics/CR-70B/79/137
Model:     CR-70B

State

State         Time
Init          2026-03-01 23:29:14
PullingImage  2026-03-01 23:29:14
Creating      2026-03-01 23:29:14
Restoring     2026-03-01 23:29:25
Standby       2026-03-01 23:29:25
Resuming      2026-03-01 23:32:34
Ready         2026-03-01 23:32:36
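The transitions above trace the snapshot/restore lifecycle: the pod goes from Init to Standby in about 11 seconds, idles in Standby until traffic arrives, then resumes to Ready in 2 seconds. A minimal sketch that derives those per-state durations from the table (the timestamps are copied from the table; the parsing helper itself is illustrative, not part of InferX):

```python
from datetime import datetime

# Transition log copied from the funcpod's State table above.
# Each row records when the pod *entered* that state.
transitions = [
    ("Init",         "2026-03-01 23:29:14"),
    ("PullingImage", "2026-03-01 23:29:14"),
    ("Creating",     "2026-03-01 23:29:14"),
    ("Restoring",    "2026-03-01 23:29:25"),
    ("Standby",      "2026-03-01 23:29:25"),
    ("Resuming",     "2026-03-01 23:32:34"),
    ("Ready",        "2026-03-01 23:32:36"),
]

def state_durations(rows):
    """Seconds spent in each state, from consecutive entry timestamps."""
    ts = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for _, t in rows]
    return {
        name: (ts[i + 1] - ts[i]).total_seconds()
        for i, (name, _) in enumerate(rows[:-1])
    }

durations = state_durations(transitions)
```

For this snapshot, `durations` shows 11 s in Creating (the restore of the CR-70B snapshot began 11 s after creation started), 189 s parked in Standby, and only 2 s in Resuming before the pod served its first request.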

Log

INFO 03-01 23:32:43 [logger.py:42] Received request cmpl-29b6ef1461b020e49ec6f7c032df6676-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO 03-01 23:32:43 [logger.py:42] Received request cmpl-602423502acf4ccda7c61a50149210fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:43 [async_llm.py:261] Added request cmpl-29b6ef1461b020e49ec6f7c032df6676-0.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:43 [async_llm.py:261] Added request cmpl-602423502acf4ccda7c61a50149210fe-0.
INFO 03-01 23:32:45 [logger.py:42] Received request cmpl-f9331809ec8d40b79e50d9b8c70a0035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:45 [async_llm.py:261] Added request cmpl-f9331809ec8d40b79e50d9b8c70a0035-0.
INFO 03-01 23:32:45 [loggers.py:116] Engine 000: Avg prompt throughput: 0.3 tokens/s, Avg generation throughput: 0.3 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:32:46 [logger.py:42] Received request cmpl-f6335b8218a649ce9f88a126cde04415-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:46 [async_llm.py:261] Added request cmpl-f6335b8218a649ce9f88a126cde04415-0.
INFO 03-01 23:32:47 [logger.py:42] Received request cmpl-b0dde1e791e74a5887525ef27a410fed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:47 [async_llm.py:261] Added request cmpl-b0dde1e791e74a5887525ef27a410fed-0.
INFO 03-01 23:32:48 [logger.py:42] Received request cmpl-60c4daf358544df1badd8e95bb1b1389-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:48 [async_llm.py:261] Added request cmpl-60c4daf358544df1badd8e95bb1b1389-0.
INFO 03-01 23:32:49 [logger.py:42] Received request cmpl-d8e3d831984e40a0b4a58e45c322f6a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:49 [async_llm.py:261] Added request cmpl-d8e3d831984e40a0b4a58e45c322f6a4-0.
INFO 03-01 23:32:50 [logger.py:42] Received request cmpl-4524c54c0a7e46169a671dc7bf02091d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:50 [async_llm.py:261] Added request cmpl-4524c54c0a7e46169a671dc7bf02091d-0.
INFO 03-01 23:32:51 [logger.py:42] Received request cmpl-777de2dfc80c48a38fd41a307d2de21a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:51 [async_llm.py:261] Added request cmpl-777de2dfc80c48a38fd41a307d2de21a-0.
INFO 03-01 23:32:53 [logger.py:42] Received request cmpl-e347c56b889f4e1c8016252b3bfb50d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:53 [async_llm.py:261] Added request cmpl-e347c56b889f4e1c8016252b3bfb50d6-0.
INFO 03-01 23:32:54 [logger.py:42] Received request cmpl-f10ffa4456c44e4e833263cf17060672-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:54 [async_llm.py:261] Added request cmpl-f10ffa4456c44e4e833263cf17060672-0.
INFO 03-01 23:32:55 [logger.py:42] Received request cmpl-20f384a96fe1474992699dfe743911c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:55 [async_llm.py:261] Added request cmpl-20f384a96fe1474992699dfe743911c8-0.
INFO 03-01 23:32:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 61.4 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.2%, Prefix cache hit rate: 0.0%
INFO 03-01 23:32:56 [logger.py:42] Received request cmpl-47b3dabdac29407aadad57285bf28932-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:56 [async_llm.py:261] Added request cmpl-47b3dabdac29407aadad57285bf28932-0.
INFO 03-01 23:32:57 [logger.py:42] Received request cmpl-2ac9b10983814083b520e002ecd19816-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:57 [async_llm.py:261] Added request cmpl-2ac9b10983814083b520e002ecd19816-0.
INFO 03-01 23:32:58 [logger.py:42] Received request cmpl-6005540eaebc467e8aa05c4d5d9a43d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:58 [async_llm.py:261] Added request cmpl-6005540eaebc467e8aa05c4d5d9a43d6-0.
INFO 03-01 23:32:59 [logger.py:42] Received request cmpl-e8a7ab86778b4481a2a0328e42dd60fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:32:59 [async_llm.py:261] Added request cmpl-e8a7ab86778b4481a2a0328e42dd60fa-0.
INFO 03-01 23:33:00 [logger.py:42] Received request cmpl-4256f9cbda3641b99ba4d9adba309e15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:00 [async_llm.py:261] Added request cmpl-4256f9cbda3641b99ba4d9adba309e15-0.
INFO 03-01 23:33:01 [logger.py:42] Received request cmpl-0ab30dd41f2d4669a8ef4ca3c93de2aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:01 [async_llm.py:261] Added request cmpl-0ab30dd41f2d4669a8ef4ca3c93de2aa-0.
INFO 03-01 23:33:02 [logger.py:42] Received request cmpl-6de0b4aed512488eb06a93136c1488ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:02 [async_llm.py:261] Added request cmpl-6de0b4aed512488eb06a93136c1488ff-0.
INFO 03-01 23:33:04 [logger.py:42] Received request cmpl-db626c45fab44773b4476dd2ace87e99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:04 [async_llm.py:261] Added request cmpl-db626c45fab44773b4476dd2ace87e99-0.
INFO 03-01 23:33:05 [logger.py:42] Received request cmpl-4a5b8167bbc846c39926e7d74fef2c61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:05 [async_llm.py:261] Added request cmpl-4a5b8167bbc846c39926e7d74fef2c61-0.
INFO 03-01 23:33:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 40.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:33:06 [logger.py:42] Received request cmpl-fd9bb753d29042b5af4c499c4d052a2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:06 [async_llm.py:261] Added request cmpl-fd9bb753d29042b5af4c499c4d052a2a-0.
INFO 03-01 23:33:07 [logger.py:42] Received request cmpl-a7427279c4d24966a7d31f847dc05d91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:07 [async_llm.py:261] Added request cmpl-a7427279c4d24966a7d31f847dc05d91-0.
INFO 03-01 23:33:08 [logger.py:42] Received request cmpl-88772896757f45249ccde4c73664d2f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:08 [async_llm.py:261] Added request cmpl-88772896757f45249ccde4c73664d2f0-0.
INFO 03-01 23:33:09 [logger.py:42] Received request cmpl-c180996dd2b84e28a85090713f992dfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:09 [async_llm.py:261] Added request cmpl-c180996dd2b84e28a85090713f992dfe-0.
INFO 03-01 23:33:10 [logger.py:42] Received request cmpl-0361998199dc49d8995e2d9b7bb2409d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:10 [async_llm.py:261] Added request cmpl-0361998199dc49d8995e2d9b7bb2409d-0.
INFO 03-01 23:33:11 [logger.py:42] Received request cmpl-4f5caec2510f44188b2eae8b6b17d629-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:11 [async_llm.py:261] Added request cmpl-4f5caec2510f44188b2eae8b6b17d629-0.
INFO 03-01 23:33:12 [logger.py:42] Received request cmpl-4c617220dfcb44d5bbe1f9579673c22a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:12 [async_llm.py:261] Added request cmpl-4c617220dfcb44d5bbe1f9579673c22a-0.
INFO 03-01 23:33:13 [logger.py:42] Received request cmpl-d57f9919026a45eb916d02b5de45f951-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:13 [async_llm.py:261] Added request cmpl-d57f9919026a45eb916d02b5de45f951-0.
INFO 03-01 23:33:15 [logger.py:42] Received request cmpl-4c316107ed10411191493eb47564a8e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:15 [async_llm.py:261] Added request cmpl-4c316107ed10411191493eb47564a8e4-0.
INFO 03-01 23:33:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:33:16 [logger.py:42] Received request cmpl-f7e41ddf7a6d452db72f3b20f0f02091-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:16 [async_llm.py:261] Added request cmpl-f7e41ddf7a6d452db72f3b20f0f02091-0.
INFO 03-01 23:33:17 [logger.py:42] Received request cmpl-92edd08c518a4e1891d26cb4c088295b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:17 [async_llm.py:261] Added request cmpl-92edd08c518a4e1891d26cb4c088295b-0.
INFO 03-01 23:33:18 [logger.py:42] Received request cmpl-48015e9762414e25bd31920a7b100bed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:18 [async_llm.py:261] Added request cmpl-48015e9762414e25bd31920a7b100bed-0.
INFO 03-01 23:33:19 [logger.py:42] Received request cmpl-c1c5f539978945a393ac3a576fe11d38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:19 [async_llm.py:261] Added request cmpl-c1c5f539978945a393ac3a576fe11d38-0.
[... 5 further request/response entries omitted (23:33:20 to 23:33:24): identical prompt, identical SamplingParams, roughly one request per second ...]
INFO 03-01 23:33:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
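The engine averages above are consistent with the traffic pattern in the log: each request carries 7 prompt tokens (the length of `prompt_token_ids`) and at most 5 completion tokens (`max_tokens=5`), at roughly one request per second with an occasional skipped second (about 0.9 req/s). A quick sanity check, assuming that effective request rate over the stat logger's averaging window:

```python
# Rough sanity check of the reported engine averages. The 0.9 req/s
# effective rate and the averaging window are assumptions inferred
# from the log timestamps, not values reported by the engine itself.
PROMPT_TOKENS = 7           # len(prompt_token_ids) in each log entry
MAX_COMPLETION_TOKENS = 5   # max_tokens=5 in SamplingParams
REQS_PER_SEC = 0.9          # ~1 req/s with occasional skipped seconds

prompt_tps = PROMPT_TOKENS * REQS_PER_SEC
gen_tps = MAX_COMPLETION_TOKENS * REQS_PER_SEC

print(f"expected prompt throughput     ~ {prompt_tps:.1f} tokens/s")  # ~ 6.3
print(f"expected generation throughput ~ {gen_tps:.1f} tokens/s")     # ~ 4.5
```

Both figures match the logged "Avg prompt throughput: 6.3 tokens/s" and "Avg generation throughput: 4.5 tokens/s", which suggests every request is completing with the full 5-token budget.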
INFO 03-01 23:33:26 [logger.py:42] Received request cmpl-9150cfb7c4f44e069151510cc184f7b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:26 [async_llm.py:261] Added request cmpl-9150cfb7c4f44e069151510cc184f7b8-0.
[... 8 further request/response entries omitted (23:33:27 to 23:33:34): identical prompt, identical SamplingParams, roughly one request per second ...]
INFO 03-01 23:33:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:33:35 [logger.py:42] Received request cmpl-cbc1a0a6c2684919a05cd65f23270352-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:35 [async_llm.py:261] Added request cmpl-cbc1a0a6c2684919a05cd65f23270352-0.
[... 8 further request/response entries omitted (23:33:37 to 23:33:44): identical prompt, identical SamplingParams, roughly one request per second ...]
INFO 03-01 23:33:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:33:45 [logger.py:42] Received request cmpl-5e28425605d74d568a9648e02a895622-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:45 [async_llm.py:261] Added request cmpl-5e28425605d74d568a9648e02a895622-0.
[... 9 further request/response entries omitted (23:33:46 to 23:33:55): identical prompt, identical SamplingParams, roughly one request per second ...]
INFO 03-01 23:33:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:33:56 [logger.py:42] Received request cmpl-39ef03863d01469e916190dc30931df1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:33:56 [async_llm.py:261] Added request cmpl-39ef03863d01469e916190dc30931df1-0.
[... 6 further request/response entries omitted (23:33:57 to 23:34:03): identical prompt, identical SamplingParams, roughly one request per second ...]
INFO 03-01 23:34:04 [logger.py:42] Received request cmpl-53a2e5ab82ff4b8f9959b5dddbfec6a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:04 [async_llm.py:261] Added request cmpl-53a2e5ab82ff4b8f9959b5dddbfec6a4-0.
INFO 03-01 23:34:05 [logger.py:42] Received request cmpl-f5b5f82b466042009f718a37197a83fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:05 [async_llm.py:261] Added request cmpl-f5b5f82b466042009f718a37197a83fc-0.
INFO 03-01 23:34:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:34:06 [logger.py:42] Received request cmpl-7a1d84ccafab41218e0b90c2f4623cdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:06 [async_llm.py:261] Added request cmpl-7a1d84ccafab41218e0b90c2f4623cdb-0.
INFO 03-01 23:34:07 [logger.py:42] Received request cmpl-8f72159a509ea802627a43fb43b5136e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.4:123 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:07 [async_llm.py:261] Added request cmpl-8f72159a509ea802627a43fb43b5136e-0.
INFO 03-01 23:34:07 [logger.py:42] Received request cmpl-aff3d36c9cb145f0b4c5d72a8f29a1d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:07 [async_llm.py:261] Added request cmpl-aff3d36c9cb145f0b4c5d72a8f29a1d6-0.
INFO 03-01 23:34:08 [logger.py:42] Received request cmpl-183ff801475345538cab63bec4dd6097-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:08 [async_llm.py:261] Added request cmpl-183ff801475345538cab63bec4dd6097-0.
INFO 03-01 23:34:09 [logger.py:42] Received request cmpl-d509d1a8e4a34f48b4c4175ddb449bd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:09 [async_llm.py:261] Added request cmpl-d509d1a8e4a34f48b4c4175ddb449bd1-0.
INFO 03-01 23:34:10 [logger.py:42] Received request cmpl-5bb0363371474ac9ab6cae2b2cb1d0d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:10 [async_llm.py:261] Added request cmpl-5bb0363371474ac9ab6cae2b2cb1d0d6-0.
INFO 03-01 23:34:12 [logger.py:42] Received request cmpl-d06d84383fdf43f4a249f72a19129936-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:12 [async_llm.py:261] Added request cmpl-d06d84383fdf43f4a249f72a19129936-0.
INFO 03-01 23:34:13 [logger.py:42] Received request cmpl-d971bb296d254943830a467d29e83ad2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:13 [async_llm.py:261] Added request cmpl-d971bb296d254943830a467d29e83ad2-0.
INFO 03-01 23:34:14 [logger.py:42] Received request cmpl-3907438696d44d58bec11282198d5de0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:14 [async_llm.py:261] Added request cmpl-3907438696d44d58bec11282198d5de0-0.
INFO 03-01 23:34:15 [logger.py:42] Received request cmpl-cc46ca5709f54a08974da5c558abab9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:15 [async_llm.py:261] Added request cmpl-cc46ca5709f54a08974da5c558abab9d-0.
INFO 03-01 23:34:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 51.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.1%, Prefix cache hit rate: 0.0%
INFO 03-01 23:34:16 [logger.py:42] Received request cmpl-4fdbea22e0aa483fb0dcefd78ddb838e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:16 [async_llm.py:261] Added request cmpl-4fdbea22e0aa483fb0dcefd78ddb838e-0.
INFO 03-01 23:34:17 [logger.py:42] Received request cmpl-56dc90d60ed44d2abf95f7b230ca44b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:17 [async_llm.py:261] Added request cmpl-56dc90d60ed44d2abf95f7b230ca44b1-0.
INFO 03-01 23:34:18 [logger.py:42] Received request cmpl-49548b0e2f7f4ef0aaed61a43dd6619c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:18 [async_llm.py:261] Added request cmpl-49548b0e2f7f4ef0aaed61a43dd6619c-0.
INFO 03-01 23:34:19 [logger.py:42] Received request cmpl-aa81f8b933264e258d823e80a7c88b83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:19 [async_llm.py:261] Added request cmpl-aa81f8b933264e258d823e80a7c88b83-0.
INFO 03-01 23:34:20 [logger.py:42] Received request cmpl-7309acca2742447b87bbd637f2158080-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:20 [async_llm.py:261] Added request cmpl-7309acca2742447b87bbd637f2158080-0.
INFO 03-01 23:34:21 [logger.py:42] Received request cmpl-31c16f17a52548c58efbcfcbd6c69481-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:21 [async_llm.py:261] Added request cmpl-31c16f17a52548c58efbcfcbd6c69481-0.
INFO 03-01 23:34:23 [logger.py:42] Received request cmpl-f9c266aef349406ca1612bac6e81a523-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:23 [async_llm.py:261] Added request cmpl-f9c266aef349406ca1612bac6e81a523-0.
INFO 03-01 23:34:24 [logger.py:42] Received request cmpl-4c7c1ef0aaf54a2f9ad8c03fdd482f28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:24 [async_llm.py:261] Added request cmpl-4c7c1ef0aaf54a2f9ad8c03fdd482f28-0.
INFO 03-01 23:34:25 [logger.py:42] Received request cmpl-ddd199067e9a45b7a62557b4454623cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:25 [async_llm.py:261] Added request cmpl-ddd199067e9a45b7a62557b4454623cb-0.
INFO 03-01 23:34:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 57.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
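The periodic `loggers.py:116` lines above report engine-level metrics (prompt/generation throughput, running and waiting request counts, KV-cache usage). A minimal parsing sketch, assuming the stats-line wording shown in this log (the exact format is version-dependent in vLLM):

```python
import re

# Targets the engine stats lines emitted periodically by vLLM's metrics logger,
# in the exact wording seen in this log. Other vLLM versions may differ.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_usage>[\d.]+)%"
)

def parse_stats(line: str):
    """Return engine metrics from a stats log line, or None if it is not one."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_usage_pct": float(d["kv_usage"]),
    }

line = ("INFO 03-01 23:34:25 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 57.9 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
        "Prefix cache hit rate: 0.0%")
print(parse_stats(line))
```

Feeding every log line through such a parser and keeping the non-`None` results gives a time series of the engine's load, which is useful for spotting the brief `Running: 1` spikes visible in this trace.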
INFO 03-01 23:34:26 [logger.py:42] Received request cmpl-d4aae44af32e4431a3bc02647aca141d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:26 [async_llm.py:261] Added request cmpl-d4aae44af32e4431a3bc02647aca141d-0.
INFO 03-01 23:34:27 [logger.py:42] Received request cmpl-ad8b0fab27d84a58a1f489940073bb13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:27 [async_llm.py:261] Added request cmpl-ad8b0fab27d84a58a1f489940073bb13-0.
INFO 03-01 23:34:28 [logger.py:42] Received request cmpl-da200465651f49369e44d08617172336-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:28 [async_llm.py:261] Added request cmpl-da200465651f49369e44d08617172336-0.
INFO 03-01 23:34:29 [logger.py:42] Received request cmpl-cfec8661e79847f3a30e3c3504724fb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:29 [async_llm.py:261] Added request cmpl-cfec8661e79847f3a30e3c3504724fb5-0.
INFO 03-01 23:34:30 [logger.py:42] Received request cmpl-b063403e10ff4b8088d1a0289527b892-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:30 [async_llm.py:261] Added request cmpl-b063403e10ff4b8088d1a0289527b892-0.
INFO 03-01 23:34:31 [logger.py:42] Received request cmpl-ec0dea2c959d421c8a93356773055a6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:31 [async_llm.py:261] Added request cmpl-ec0dea2c959d421c8a93356773055a6c-0.
INFO 03-01 23:34:32 [logger.py:42] Received request cmpl-d9da468b0eac4a47aec1b3e0f60c68f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:32 [async_llm.py:261] Added request cmpl-d9da468b0eac4a47aec1b3e0f60c68f4-0.
INFO 03-01 23:34:33 [logger.py:42] Received request cmpl-a0050b4694ae43d7a65e86b3448f6c33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:33 [async_llm.py:261] Added request cmpl-a0050b4694ae43d7a65e86b3448f6c33-0.
INFO 03-01 23:34:35 [logger.py:42] Received request cmpl-49d3d76141f341e6b2b147045d769b34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:35 [async_llm.py:261] Added request cmpl-49d3d76141f341e6b2b147045d769b34-0.
INFO 03-01 23:34:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:34:36 [logger.py:42] Received request cmpl-9a0392cff6324fa790cb0d44a6ac123b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:36 [async_llm.py:261] Added request cmpl-9a0392cff6324fa790cb0d44a6ac123b-0.
INFO 03-01 23:34:37 [logger.py:42] Received request cmpl-337bf340e0274bcdbdf5c21c9b1daaa5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:34:37 [async_llm.py:261] Added request cmpl-337bf340e0274bcdbdf5c21c9b1daaa5-0.
[... 7 identical request/response entries elided (23:34:38–23:34:44); only the request IDs and timestamps change ...]
INFO 03-01 23:34:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request/response entries elided (23:34:46–23:34:54); only the request IDs and timestamps change ...]
INFO 03-01 23:34:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request/response entries elided (23:34:55–23:35:04); only the request IDs and timestamps change ...]
INFO 03-01 23:35:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 identical request/response entries elided (23:35:05–23:35:15); only the request IDs and timestamps change ...]
INFO 03-01 23:35:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... further identical entries elided (23:35:16–23:35:22); the log continues in the same pattern ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:22 [async_llm.py:261] Added request cmpl-f15ef0edb715441d96220b7ad4dde52e-0.
INFO 03-01 23:35:23 [logger.py:42] Received request cmpl-78b2d1ca06f243afb78c4ec5ea30843a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:23 [async_llm.py:261] Added request cmpl-78b2d1ca06f243afb78c4ec5ea30843a-0.
INFO 03-01 23:35:24 [logger.py:42] Received request cmpl-d9613b904c474dfe8178aadc1a12444b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:24 [async_llm.py:261] Added request cmpl-d9613b904c474dfe8178aadc1a12444b-0.
INFO 03-01 23:35:25 [logger.py:42] Received request cmpl-9f1077e9843e4a3e98448dc4abb22a2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:25 [async_llm.py:261] Added request cmpl-9f1077e9843e4a3e98448dc4abb22a2e-0.
INFO 03-01 23:35:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:26 [logger.py:42] Received request cmpl-f6e86434032c4c8bad7dfb1f6092546d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:26 [async_llm.py:261] Added request cmpl-f6e86434032c4c8bad7dfb1f6092546d-0.
INFO 03-01 23:35:27 [logger.py:42] Received request cmpl-c019918ba53e4e2e8c43b597d843ffb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:27 [async_llm.py:261] Added request cmpl-c019918ba53e4e2e8c43b597d843ffb9-0.
INFO 03-01 23:35:28 [logger.py:42] Received request cmpl-cefaeaadd29d4ab1bb25408f9597b3b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:28 [async_llm.py:261] Added request cmpl-cefaeaadd29d4ab1bb25408f9597b3b7-0.
INFO 03-01 23:35:29 [logger.py:42] Received request cmpl-f25843f3906d4cf6954801be083b210f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:29 [async_llm.py:261] Added request cmpl-f25843f3906d4cf6954801be083b210f-0.
INFO 03-01 23:35:30 [logger.py:42] Received request cmpl-3c94d74ccc5a4e6c8fa9ebb89be9002b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:30 [async_llm.py:261] Added request cmpl-3c94d74ccc5a4e6c8fa9ebb89be9002b-0.
INFO 03-01 23:35:32 [logger.py:42] Received request cmpl-f44778c087cb40558b9574d1bc2f5b9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:32 [async_llm.py:261] Added request cmpl-f44778c087cb40558b9574d1bc2f5b9c-0.
INFO 03-01 23:35:33 [logger.py:42] Received request cmpl-de10a78c1baf4265b8f8685681e4f181-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:33 [async_llm.py:261] Added request cmpl-de10a78c1baf4265b8f8685681e4f181-0.
INFO 03-01 23:35:34 [logger.py:42] Received request cmpl-aa8c95d11d894fa6a37f307605bdf7bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:34 [async_llm.py:261] Added request cmpl-aa8c95d11d894fa6a37f307605bdf7bc-0.
INFO 03-01 23:35:35 [logger.py:42] Received request cmpl-29e43550d43c47749c3c247c0016cf86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:35 [async_llm.py:261] Added request cmpl-29e43550d43c47749c3c247c0016cf86-0.
INFO 03-01 23:35:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:36 [logger.py:42] Received request cmpl-91d34b6fe6b24825b4f6c7f742718d16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:36 [async_llm.py:261] Added request cmpl-91d34b6fe6b24825b4f6c7f742718d16-0.
INFO 03-01 23:35:37 [logger.py:42] Received request cmpl-301ffed708614bdba6ab975c69ee3292-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:37 [async_llm.py:261] Added request cmpl-301ffed708614bdba6ab975c69ee3292-0.
INFO 03-01 23:35:38 [logger.py:42] Received request cmpl-35e2d97c40c34548b0746cdad5c7159e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:38 [async_llm.py:261] Added request cmpl-35e2d97c40c34548b0746cdad5c7159e-0.
INFO 03-01 23:35:39 [logger.py:42] Received request cmpl-bc057eba589c402cb9c598a877c567e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:39 [async_llm.py:261] Added request cmpl-bc057eba589c402cb9c598a877c567e7-0.
INFO 03-01 23:35:40 [logger.py:42] Received request cmpl-9a86f6a18eb84160ade3953681b7a296-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:40 [async_llm.py:261] Added request cmpl-9a86f6a18eb84160ade3953681b7a296-0.
INFO 03-01 23:35:41 [logger.py:42] Received request cmpl-50e162390adc4e65b3af7775981b851a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:41 [async_llm.py:261] Added request cmpl-50e162390adc4e65b3af7775981b851a-0.
INFO 03-01 23:35:42 [logger.py:42] Received request cmpl-e155a3be3a10431d8bbdb21b2de9cc96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:42 [async_llm.py:261] Added request cmpl-e155a3be3a10431d8bbdb21b2de9cc96-0.
INFO 03-01 23:35:44 [logger.py:42] Received request cmpl-4171f3b2225c48198476270856d829ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:44 [async_llm.py:261] Added request cmpl-4171f3b2225c48198476270856d829ea-0.
INFO 03-01 23:35:45 [logger.py:42] Received request cmpl-10ebee6639dc4789aef38bfb602a1ff9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:45 [async_llm.py:261] Added request cmpl-10ebee6639dc4789aef38bfb602a1ff9-0.
INFO 03-01 23:35:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:46 [logger.py:42] Received request cmpl-4ee162d5021343a4b0e1ec1f86c005b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:46 [async_llm.py:261] Added request cmpl-4ee162d5021343a4b0e1ec1f86c005b2-0.
INFO 03-01 23:35:47 [logger.py:42] Received request cmpl-c03a11fe8f3948899d22b7d952390bc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:47 [async_llm.py:261] Added request cmpl-c03a11fe8f3948899d22b7d952390bc4-0.
INFO 03-01 23:35:48 [logger.py:42] Received request cmpl-835154c7fdf44db6aaf3768cffe0d160-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:48 [async_llm.py:261] Added request cmpl-835154c7fdf44db6aaf3768cffe0d160-0.
INFO 03-01 23:35:49 [logger.py:42] Received request cmpl-b5323262751a4b85a71b39fc912a1022-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:49 [async_llm.py:261] Added request cmpl-b5323262751a4b85a71b39fc912a1022-0.
INFO 03-01 23:35:50 [logger.py:42] Received request cmpl-a1f38c2a87c44c3a984e0db4a48911bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:50 [async_llm.py:261] Added request cmpl-a1f38c2a87c44c3a984e0db4a48911bd-0.
INFO 03-01 23:35:51 [logger.py:42] Received request cmpl-e32d337422bf449da2898601d4dc1a5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:51 [async_llm.py:261] Added request cmpl-e32d337422bf449da2898601d4dc1a5e-0.
INFO 03-01 23:35:52 [logger.py:42] Received request cmpl-f454a9cc438343ec8830130d267aecbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:52 [async_llm.py:261] Added request cmpl-f454a9cc438343ec8830130d267aecbe-0.
INFO 03-01 23:35:53 [logger.py:42] Received request cmpl-e83e003ca38a484a81499a415968677c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:53 [async_llm.py:261] Added request cmpl-e83e003ca38a484a81499a415968677c-0.
INFO 03-01 23:35:55 [logger.py:42] Received request cmpl-32a6d7ebb2b8457eaedc48333feec039-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:55 [async_llm.py:261] Added request cmpl-32a6d7ebb2b8457eaedc48333feec039-0.
INFO 03-01 23:35:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:35:56 [logger.py:42] Received request cmpl-072b5c1601354cb3872a2dfc0cbd7891-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:35:56 [async_llm.py:261] Added request cmpl-072b5c1601354cb3872a2dfc0cbd7891-0.
[... 40 further request/response triplets (03-01 23:35:57 – 23:36:39) elided: each "Received request" / "200 OK" / "Added request" entry is identical to the one above except for its timestamp and request ID ...]
INFO 03-01 23:36:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:40 [logger.py:42] Received request cmpl-c39187af003b49dd9f05aae0783af99a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:40 [async_llm.py:261] Added request cmpl-c39187af003b49dd9f05aae0783af99a-0.
INFO 03-01 23:36:42 [logger.py:42] Received request cmpl-624cfa6d28034dd6b174fe7698bd023e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:42 [async_llm.py:261] Added request cmpl-624cfa6d28034dd6b174fe7698bd023e-0.
INFO 03-01 23:36:43 [logger.py:42] Received request cmpl-fe94421d619b43998e92bb1c9f48902d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:43 [async_llm.py:261] Added request cmpl-fe94421d619b43998e92bb1c9f48902d-0.
INFO 03-01 23:36:44 [logger.py:42] Received request cmpl-cbee78e5615a4a2e925be06a8697fa85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:44 [async_llm.py:261] Added request cmpl-cbee78e5615a4a2e925be06a8697fa85-0.
INFO 03-01 23:36:45 [logger.py:42] Received request cmpl-a1f92ec911d04640a52e573d707f7a4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:45 [async_llm.py:261] Added request cmpl-a1f92ec911d04640a52e573d707f7a4b-0.
INFO 03-01 23:36:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:46 [logger.py:42] Received request cmpl-ab12302ad01242ee82edd40cd418f88f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:46 [async_llm.py:261] Added request cmpl-ab12302ad01242ee82edd40cd418f88f-0.
INFO 03-01 23:36:47 [logger.py:42] Received request cmpl-f3d291251b86483b93f543a218fffa97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:47 [async_llm.py:261] Added request cmpl-f3d291251b86483b93f543a218fffa97-0.
INFO 03-01 23:36:48 [logger.py:42] Received request cmpl-1ccac26ec5874caebacb9dd9fa19bbc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:48 [async_llm.py:261] Added request cmpl-1ccac26ec5874caebacb9dd9fa19bbc7-0.
INFO 03-01 23:36:49 [logger.py:42] Received request cmpl-cd1bb1e7c0a642449b344d1e0dea40e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:49 [async_llm.py:261] Added request cmpl-cd1bb1e7c0a642449b344d1e0dea40e9-0.
INFO 03-01 23:36:50 [logger.py:42] Received request cmpl-55a7fdccd6e94814923675fecbe881b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:50 [async_llm.py:261] Added request cmpl-55a7fdccd6e94814923675fecbe881b7-0.
INFO 03-01 23:36:51 [logger.py:42] Received request cmpl-5644326ad10b48d296c71eb9cb558ce6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:51 [async_llm.py:261] Added request cmpl-5644326ad10b48d296c71eb9cb558ce6-0.
INFO 03-01 23:36:53 [logger.py:42] Received request cmpl-8e92f2c5631d49f9a06ee79fac5a9182-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:53 [async_llm.py:261] Added request cmpl-8e92f2c5631d49f9a06ee79fac5a9182-0.
INFO 03-01 23:36:54 [logger.py:42] Received request cmpl-fe8d735b267c41d68a39c31b747db1e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:54 [async_llm.py:261] Added request cmpl-fe8d735b267c41d68a39c31b747db1e3-0.
INFO 03-01 23:36:55 [logger.py:42] Received request cmpl-b3e1e592822345c386427980e94993bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:55 [async_llm.py:261] Added request cmpl-b3e1e592822345c386427980e94993bf-0.
INFO 03-01 23:36:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:36:56 [logger.py:42] Received request cmpl-674c597aae5a48b7bcefd7914a23f8d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:56 [async_llm.py:261] Added request cmpl-674c597aae5a48b7bcefd7914a23f8d0-0.
INFO 03-01 23:36:57 [logger.py:42] Received request cmpl-34dadc74000849b8a7313c639f1e698c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:57 [async_llm.py:261] Added request cmpl-34dadc74000849b8a7313c639f1e698c-0.
INFO 03-01 23:36:58 [logger.py:42] Received request cmpl-9fdddf43b6c54c49a4605f576379c2f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:58 [async_llm.py:261] Added request cmpl-9fdddf43b6c54c49a4605f576379c2f3-0.
INFO 03-01 23:36:59 [logger.py:42] Received request cmpl-8bf03be05cc14628ae7d2d059caf14f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:36:59 [async_llm.py:261] Added request cmpl-8bf03be05cc14628ae7d2d059caf14f7-0.
INFO 03-01 23:37:00 [logger.py:42] Received request cmpl-b52aa196965e442da28ceb5e34200354-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:00 [async_llm.py:261] Added request cmpl-b52aa196965e442da28ceb5e34200354-0.
INFO 03-01 23:37:01 [logger.py:42] Received request cmpl-73d5197fd23f442d92ba7dc07bed0c0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:01 [async_llm.py:261] Added request cmpl-73d5197fd23f442d92ba7dc07bed0c0e-0.
INFO 03-01 23:37:02 [logger.py:42] Received request cmpl-2f94af8563684386be60afbc2483f539-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:02 [async_llm.py:261] Added request cmpl-2f94af8563684386be60afbc2483f539-0.
INFO 03-01 23:37:03 [logger.py:42] Received request cmpl-1b1d7adaa1764910900696053132f664-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:03 [async_llm.py:261] Added request cmpl-1b1d7adaa1764910900696053132f664-0.
INFO 03-01 23:37:05 [logger.py:42] Received request cmpl-be2d3265c85143ddb93bfaf431636396-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:05 [async_llm.py:261] Added request cmpl-be2d3265c85143ddb93bfaf431636396-0.
INFO 03-01 23:37:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:06 [logger.py:42] Received request cmpl-0b094314d8cb4ced85c37b8613c6eee4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:06 [async_llm.py:261] Added request cmpl-0b094314d8cb4ced85c37b8613c6eee4-0.
INFO 03-01 23:37:07 [logger.py:42] Received request cmpl-e73d4c2e32be44149d1b0ef6dfa5ce88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:07 [async_llm.py:261] Added request cmpl-e73d4c2e32be44149d1b0ef6dfa5ce88-0.
INFO 03-01 23:37:08 [logger.py:42] Received request cmpl-beb3afe722e742b991a16a2f21e3296b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:08 [async_llm.py:261] Added request cmpl-beb3afe722e742b991a16a2f21e3296b-0.
INFO 03-01 23:37:09 [logger.py:42] Received request cmpl-9925adf015e644ccaaa32b30380d7b83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:09 [async_llm.py:261] Added request cmpl-9925adf015e644ccaaa32b30380d7b83-0.
INFO 03-01 23:37:10 [logger.py:42] Received request cmpl-66edb1b9fdf84ba09b2251a68e56d56c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:10 [async_llm.py:261] Added request cmpl-66edb1b9fdf84ba09b2251a68e56d56c-0.
INFO 03-01 23:37:11 [logger.py:42] Received request cmpl-7a12bb7ee7944eb2802f6eca0f8d1b8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:11 [async_llm.py:261] Added request cmpl-7a12bb7ee7944eb2802f6eca0f8d1b8d-0.
INFO 03-01 23:37:12 [logger.py:42] Received request cmpl-30cf58dd54264836b8d27a14f47802d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:12 [async_llm.py:261] Added request cmpl-30cf58dd54264836b8d27a14f47802d6-0.
INFO 03-01 23:37:13 [logger.py:42] Received request cmpl-53c7c852106740019b0e1067d4e98469-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:13 [async_llm.py:261] Added request cmpl-53c7c852106740019b0e1067d4e98469-0.
INFO 03-01 23:37:14 [logger.py:42] Received request cmpl-7c477a1d8b3f4e8084ea9fd2878c3216-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:14 [async_llm.py:261] Added request cmpl-7c477a1d8b3f4e8084ea9fd2878c3216-0.
INFO 03-01 23:37:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:37:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:37:59 [async_llm.py:261] Added request cmpl-982dbcaf9c2d458898e75b2a5e5da436-0.
INFO 03-01 23:38:00 [logger.py:42] Received request cmpl-2074642a27ff455693f4498116380b5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:00 [async_llm.py:261] Added request cmpl-2074642a27ff455693f4498116380b5f-0.
INFO 03-01 23:38:01 [logger.py:42] Received request cmpl-31d1f923d60d4974b6e22776bd5098a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:01 [async_llm.py:261] Added request cmpl-31d1f923d60d4974b6e22776bd5098a3-0.
INFO 03-01 23:38:02 [logger.py:42] Received request cmpl-34155561cd3b41c9ad5575999d4bf08d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:02 [async_llm.py:261] Added request cmpl-34155561cd3b41c9ad5575999d4bf08d-0.
INFO 03-01 23:38:04 [logger.py:42] Received request cmpl-a3eff671b5b9403598a69add65e7e504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:04 [async_llm.py:261] Added request cmpl-a3eff671b5b9403598a69add65e7e504-0.
INFO 03-01 23:38:05 [logger.py:42] Received request cmpl-f8e446c996c44c589ef925d40804b832-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:05 [async_llm.py:261] Added request cmpl-f8e446c996c44c589ef925d40804b832-0.
INFO 03-01 23:38:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:06 [logger.py:42] Received request cmpl-f080b96b6f474c03bd270bba8f233c53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:06 [async_llm.py:261] Added request cmpl-f080b96b6f474c03bd270bba8f233c53-0.
INFO 03-01 23:38:07 [logger.py:42] Received request cmpl-2cac1366187747d1a55c09d302945efe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:07 [async_llm.py:261] Added request cmpl-2cac1366187747d1a55c09d302945efe-0.
INFO 03-01 23:38:08 [logger.py:42] Received request cmpl-7597d0867ccb43378a2dc4ec4b8bfa58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:08 [async_llm.py:261] Added request cmpl-7597d0867ccb43378a2dc4ec4b8bfa58-0.
INFO 03-01 23:38:09 [logger.py:42] Received request cmpl-b2cb692e391b4704b40611dd0539343f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:09 [async_llm.py:261] Added request cmpl-b2cb692e391b4704b40611dd0539343f-0.
INFO 03-01 23:38:10 [logger.py:42] Received request cmpl-7c457bf58bbb49879b2444f71c38c383-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:10 [async_llm.py:261] Added request cmpl-7c457bf58bbb49879b2444f71c38c383-0.
INFO 03-01 23:38:11 [logger.py:42] Received request cmpl-0b0e1a7b3b2f4a26a125da5137ecf36f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:11 [async_llm.py:261] Added request cmpl-0b0e1a7b3b2f4a26a125da5137ecf36f-0.
INFO 03-01 23:38:12 [logger.py:42] Received request cmpl-3648c2ace07647e2b4f1c94205ff609b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:12 [async_llm.py:261] Added request cmpl-3648c2ace07647e2b4f1c94205ff609b-0.
INFO 03-01 23:38:13 [logger.py:42] Received request cmpl-a5a6ed7ebfdc44ad8b4811095026181c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:13 [async_llm.py:261] Added request cmpl-a5a6ed7ebfdc44ad8b4811095026181c-0.
INFO 03-01 23:38:15 [logger.py:42] Received request cmpl-a1cbb720f45640718bb57fd4a2db3059-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:15 [async_llm.py:261] Added request cmpl-a1cbb720f45640718bb57fd4a2db3059-0.
INFO 03-01 23:38:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:16 [logger.py:42] Received request cmpl-a9ef1c2588b34ab5a90877317545af22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:16 [async_llm.py:261] Added request cmpl-a9ef1c2588b34ab5a90877317545af22-0.
INFO 03-01 23:38:17 [logger.py:42] Received request cmpl-58a3d8989890456390d0aef5037dca9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:17 [async_llm.py:261] Added request cmpl-58a3d8989890456390d0aef5037dca9f-0.
INFO 03-01 23:38:18 [logger.py:42] Received request cmpl-6974cc834aa946cba3c3d8c19e9a8a6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:18 [async_llm.py:261] Added request cmpl-6974cc834aa946cba3c3d8c19e9a8a6e-0.
INFO 03-01 23:38:19 [logger.py:42] Received request cmpl-e93cf1ded62c4463a1b025ceb0b846f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:19 [async_llm.py:261] Added request cmpl-e93cf1ded62c4463a1b025ceb0b846f0-0.
INFO 03-01 23:38:20 [logger.py:42] Received request cmpl-20d78bb4912742e3a55931400931b1c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:20 [async_llm.py:261] Added request cmpl-20d78bb4912742e3a55931400931b1c4-0.
INFO 03-01 23:38:21 [logger.py:42] Received request cmpl-6e9d10536c73422c8780433153b4b2b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:21 [async_llm.py:261] Added request cmpl-6e9d10536c73422c8780433153b4b2b6-0.
INFO 03-01 23:38:22 [logger.py:42] Received request cmpl-fe4314b984b14abe8d1610fe6f1e062c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:22 [async_llm.py:261] Added request cmpl-fe4314b984b14abe8d1610fe6f1e062c-0.
INFO 03-01 23:38:23 [logger.py:42] Received request cmpl-b92ae21989104b1896632ddff39a02ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:23 [async_llm.py:261] Added request cmpl-b92ae21989104b1896632ddff39a02ac-0.
INFO 03-01 23:38:24 [logger.py:42] Received request cmpl-3b9fb4631b31484891a4c9546b6b013e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:24 [async_llm.py:261] Added request cmpl-3b9fb4631b31484891a4c9546b6b013e-0.
INFO 03-01 23:38:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:38:25 [logger.py:42] Received request cmpl-38fd7e0c41ee4247a2cd74baf96b4af3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:25 [async_llm.py:261] Added request cmpl-38fd7e0c41ee4247a2cd74baf96b4af3-0.
INFO 03-01 23:38:27 [logger.py:42] Received request cmpl-7370c79cf7a742fca6fbb39cf5f40b66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:27 [async_llm.py:261] Added request cmpl-7370c79cf7a742fca6fbb39cf5f40b66-0.
INFO 03-01 23:38:28 [logger.py:42] Received request cmpl-8f40d3f6b4ae48f6828ecef4ce6ab6d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:28 [async_llm.py:261] Added request cmpl-8f40d3f6b4ae48f6828ecef4ce6ab6d7-0.
INFO 03-01 23:38:29 [logger.py:42] Received request cmpl-0ecfb3dfee284c0982bbeb92057e95dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:29 [async_llm.py:261] Added request cmpl-0ecfb3dfee284c0982bbeb92057e95dc-0.
INFO 03-01 23:38:30 [logger.py:42] Received request cmpl-d3f3803367dd4357ad80b4a06bfad14e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:30 [async_llm.py:261] Added request cmpl-d3f3803367dd4357ad80b4a06bfad14e-0.
INFO 03-01 23:38:31 [logger.py:42] Received request cmpl-45beaba2a5114184b84e4fdc799f50a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:31 [async_llm.py:261] Added request cmpl-45beaba2a5114184b84e4fdc799f50a3-0.
INFO 03-01 23:38:32 [logger.py:42] Received request cmpl-5eaef4f84cdc4cb5a85fd356830169a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:32 [async_llm.py:261] Added request cmpl-5eaef4f84cdc4cb5a85fd356830169a1-0.
INFO 03-01 23:38:33 [logger.py:42] Received request cmpl-1810be22f26c49238c8abccb7bd8c3f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:33 [async_llm.py:261] Added request cmpl-1810be22f26c49238c8abccb7bd8c3f3-0.
INFO 03-01 23:38:34 [logger.py:42] Received request cmpl-04f923d8d2204842b2fd0d72a8859245-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:38:34 [async_llm.py:261] Added request cmpl-04f923d8d2204842b2fd0d72a8859245-0.
INFO 03-01 23:38:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 23:38:35–23:38:45: 10 request/response cycles elided; identical prompt and SamplingParams, only request IDs and timestamps differ (roughly one request per second) ...]
INFO 03-01 23:38:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 23:38:46–23:38:55: 9 request/response cycles elided; identical prompt and SamplingParams, only request IDs and timestamps differ ...]
INFO 03-01 23:38:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 23:38:56–23:39:05: 9 request/response cycles elided; identical prompt and SamplingParams, only request IDs and timestamps differ ...]
INFO 03-01 23:39:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 23:39:06–23:39:15: 9 request/response cycles elided; identical prompt and SamplingParams, only request IDs and timestamps differ ...]
INFO 03-01 23:39:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:16 [logger.py:42] Received request cmpl-dc6a8dd35dc441739277c8db1fe60b6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:16 [async_llm.py:261] Added request cmpl-dc6a8dd35dc441739277c8db1fe60b6c-0.
INFO 03-01 23:39:17 [logger.py:42] Received request cmpl-52f8f5977ee846b5bf3b72bc1b689a4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:17 [async_llm.py:261] Added request cmpl-52f8f5977ee846b5bf3b72bc1b689a4d-0.
INFO 03-01 23:39:18 [logger.py:42] Received request cmpl-644415f1825c45d48d9d2475a6acb8cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:18 [async_llm.py:261] Added request cmpl-644415f1825c45d48d9d2475a6acb8cc-0.
INFO 03-01 23:39:19 [logger.py:42] Received request cmpl-30e708267aa242918aca0b777b9a99c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:19 [async_llm.py:261] Added request cmpl-30e708267aa242918aca0b777b9a99c1-0.
INFO 03-01 23:39:20 [logger.py:42] Received request cmpl-7571b25f8c094bdbadc4c2b4e9e8a448-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:20 [async_llm.py:261] Added request cmpl-7571b25f8c094bdbadc4c2b4e9e8a448-0.
INFO 03-01 23:39:21 [logger.py:42] Received request cmpl-7eadfa8633944326a9747fa6aa61f98c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:21 [async_llm.py:261] Added request cmpl-7eadfa8633944326a9747fa6aa61f98c-0.
INFO 03-01 23:39:22 [logger.py:42] Received request cmpl-51fde05c42114f81880a9bcba97552ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:22 [async_llm.py:261] Added request cmpl-51fde05c42114f81880a9bcba97552ce-0.
INFO 03-01 23:39:23 [logger.py:42] Received request cmpl-ab2e64e254324c2eb84229d0bb7c2a28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:23 [async_llm.py:261] Added request cmpl-ab2e64e254324c2eb84229d0bb7c2a28-0.
INFO 03-01 23:39:24 [logger.py:42] Received request cmpl-ad326a9f69984071ab50b4c8539e33f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:24 [async_llm.py:261] Added request cmpl-ad326a9f69984071ab50b4c8539e33f2-0.
INFO 03-01 23:39:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:26 [logger.py:42] Received request cmpl-83d8462c587e470cb5449108e59a5582-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:26 [async_llm.py:261] Added request cmpl-83d8462c587e470cb5449108e59a5582-0.
INFO 03-01 23:39:27 [logger.py:42] Received request cmpl-63f8d480f0e749f589eb660d2e5f8c03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:27 [async_llm.py:261] Added request cmpl-63f8d480f0e749f589eb660d2e5f8c03-0.
INFO 03-01 23:39:28 [logger.py:42] Received request cmpl-34c8e159e76c47ce950a350ea8ee1e09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:28 [async_llm.py:261] Added request cmpl-34c8e159e76c47ce950a350ea8ee1e09-0.
INFO 03-01 23:39:29 [logger.py:42] Received request cmpl-654d8e0317db42f491ea938d42e285e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:29 [async_llm.py:261] Added request cmpl-654d8e0317db42f491ea938d42e285e1-0.
INFO 03-01 23:39:30 [logger.py:42] Received request cmpl-e4af8c3ca07b4890ba62cd6d525c2fb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:30 [async_llm.py:261] Added request cmpl-e4af8c3ca07b4890ba62cd6d525c2fb7-0.
INFO 03-01 23:39:31 [logger.py:42] Received request cmpl-718f83ddd8e24a048620a798eff4f82e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:31 [async_llm.py:261] Added request cmpl-718f83ddd8e24a048620a798eff4f82e-0.
INFO 03-01 23:39:32 [logger.py:42] Received request cmpl-de41526228274de79f0dbc048d3c480d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:32 [async_llm.py:261] Added request cmpl-de41526228274de79f0dbc048d3c480d-0.
INFO 03-01 23:39:33 [logger.py:42] Received request cmpl-f554dfb293124c36b3770a1c50e79811-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:33 [async_llm.py:261] Added request cmpl-f554dfb293124c36b3770a1c50e79811-0.
INFO 03-01 23:39:34 [logger.py:42] Received request cmpl-3aced4d8e0e64640b5ab58874ddfa952-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:34 [async_llm.py:261] Added request cmpl-3aced4d8e0e64640b5ab58874ddfa952-0.
INFO 03-01 23:39:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:35 [logger.py:42] Received request cmpl-3a75b904ab8549b888aa2dd4e885ecec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:35 [async_llm.py:261] Added request cmpl-3a75b904ab8549b888aa2dd4e885ecec-0.
INFO 03-01 23:39:36 [logger.py:42] Received request cmpl-0191929358534e45b428dea3ac25bbee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:36 [async_llm.py:261] Added request cmpl-0191929358534e45b428dea3ac25bbee-0.
INFO 03-01 23:39:38 [logger.py:42] Received request cmpl-680f022f287f4f52912575b625c1b3e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:38 [async_llm.py:261] Added request cmpl-680f022f287f4f52912575b625c1b3e1-0.
INFO 03-01 23:39:39 [logger.py:42] Received request cmpl-66d2931031e743fdb1c8ff65df9bcd61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:39 [async_llm.py:261] Added request cmpl-66d2931031e743fdb1c8ff65df9bcd61-0.
INFO 03-01 23:39:40 [logger.py:42] Received request cmpl-46a2ef864062454bbf5c30be76a91d84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:40 [async_llm.py:261] Added request cmpl-46a2ef864062454bbf5c30be76a91d84-0.
INFO 03-01 23:39:41 [logger.py:42] Received request cmpl-0fd663cacb134ebca36ac6a429a937ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:41 [async_llm.py:261] Added request cmpl-0fd663cacb134ebca36ac6a429a937ec-0.
INFO 03-01 23:39:42 [logger.py:42] Received request cmpl-61508454181e493b9563d6eb8e635069-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:42 [async_llm.py:261] Added request cmpl-61508454181e493b9563d6eb8e635069-0.
INFO 03-01 23:39:43 [logger.py:42] Received request cmpl-291883bb46834ac1b264bd87c86135d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:43 [async_llm.py:261] Added request cmpl-291883bb46834ac1b264bd87c86135d7-0.
INFO 03-01 23:39:44 [logger.py:42] Received request cmpl-674da6ec56e64d04827debba1205ec84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:44 [async_llm.py:261] Added request cmpl-674da6ec56e64d04827debba1205ec84-0.
INFO 03-01 23:39:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:45 [logger.py:42] Received request cmpl-b6c9d40a725d4f5b80c40e5d0e405e1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:45 [async_llm.py:261] Added request cmpl-b6c9d40a725d4f5b80c40e5d0e405e1f-0.
INFO 03-01 23:39:46 [logger.py:42] Received request cmpl-23e8820959ed424281811fc5fc4cd61a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:46 [async_llm.py:261] Added request cmpl-23e8820959ed424281811fc5fc4cd61a-0.
INFO 03-01 23:39:47 [logger.py:42] Received request cmpl-2c2e1dfd41cd4aa6bf69f23bd531cb78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:47 [async_llm.py:261] Added request cmpl-2c2e1dfd41cd4aa6bf69f23bd531cb78-0.
INFO 03-01 23:39:49 [logger.py:42] Received request cmpl-24eb7243e33647e098408770b90e0f3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:49 [async_llm.py:261] Added request cmpl-24eb7243e33647e098408770b90e0f3a-0.
INFO 03-01 23:39:50 [logger.py:42] Received request cmpl-9701626500154fbaa7380f1e7d0c8fde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:50 [async_llm.py:261] Added request cmpl-9701626500154fbaa7380f1e7d0c8fde-0.
INFO 03-01 23:39:51 [logger.py:42] Received request cmpl-a74ae779375941ada7a6f345e59c6c60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:51 [async_llm.py:261] Added request cmpl-a74ae779375941ada7a6f345e59c6c60-0.
INFO 03-01 23:39:52 [logger.py:42] Received request cmpl-211d7099f7d443d0b3b01e5ad41c93ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:52 [async_llm.py:261] Added request cmpl-211d7099f7d443d0b3b01e5ad41c93ba-0.
INFO 03-01 23:39:53 [logger.py:42] Received request cmpl-b637d69157d04781ad77ed3c17833af9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:53 [async_llm.py:261] Added request cmpl-b637d69157d04781ad77ed3c17833af9-0.
INFO 03-01 23:39:54 [logger.py:42] Received request cmpl-e35c251581944c7584b153856b534a3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:54 [async_llm.py:261] Added request cmpl-e35c251581944c7584b153856b534a3c-0.
INFO 03-01 23:39:55 [logger.py:42] Received request cmpl-a1930168d32c4752a3f6673736b37430-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:55 [async_llm.py:261] Added request cmpl-a1930168d32c4752a3f6673736b37430-0.
INFO 03-01 23:39:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:39:56 [logger.py:42] Received request cmpl-3694554a45154c59944d857e66132bbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:56 [async_llm.py:261] Added request cmpl-3694554a45154c59944d857e66132bbe-0.
INFO 03-01 23:39:57 [logger.py:42] Received request cmpl-f64368c1de1b4c40b5fea2656ba9b166-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:57 [async_llm.py:261] Added request cmpl-f64368c1de1b4c40b5fea2656ba9b166-0.
INFO 03-01 23:39:58 [logger.py:42] Received request cmpl-1dfa877faa4c494683b3396ea0b07cc8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:58 [async_llm.py:261] Added request cmpl-1dfa877faa4c494683b3396ea0b07cc8-0.
INFO 03-01 23:39:59 [logger.py:42] Received request cmpl-0f80cea68ff040e9ac9e914011f3fc72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:39:59 [async_llm.py:261] Added request cmpl-0f80cea68ff040e9ac9e914011f3fc72-0.
INFO 03-01 23:40:01 [logger.py:42] Received request cmpl-028327a9996b46ccb29d055de6f99699-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:01 [async_llm.py:261] Added request cmpl-028327a9996b46ccb29d055de6f99699-0.
INFO 03-01 23:40:02 [logger.py:42] Received request cmpl-e8de219512d0492cb3060cff0f85d112-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:02 [async_llm.py:261] Added request cmpl-e8de219512d0492cb3060cff0f85d112-0.
INFO 03-01 23:40:03 [logger.py:42] Received request cmpl-69e9106f6645443da644d02fad046f3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:03 [async_llm.py:261] Added request cmpl-69e9106f6645443da644d02fad046f3b-0.
INFO 03-01 23:40:04 [logger.py:42] Received request cmpl-51c0ab28018e4874a5fab79005530fc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:04 [async_llm.py:261] Added request cmpl-51c0ab28018e4874a5fab79005530fc2-0.
INFO 03-01 23:40:05 [logger.py:42] Received request cmpl-b6dc6e2d62f24b6b96fa81239608d5b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:05 [async_llm.py:261] Added request cmpl-b6dc6e2d62f24b6b96fa81239608d5b9-0.
INFO 03-01 23:40:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:06 [logger.py:42] Received request cmpl-7a8ad85128664ab08311e1842b260c65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:06 [async_llm.py:261] Added request cmpl-7a8ad85128664ab08311e1842b260c65-0.
INFO 03-01 23:40:07 [logger.py:42] Received request cmpl-f5a529994f9640eba53fcb44396fdf16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:07 [async_llm.py:261] Added request cmpl-f5a529994f9640eba53fcb44396fdf16-0.
INFO 03-01 23:40:08 [logger.py:42] Received request cmpl-c7704dea35b64f30936520b7a02f27be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:08 [async_llm.py:261] Added request cmpl-c7704dea35b64f30936520b7a02f27be-0.
INFO 03-01 23:40:09 [logger.py:42] Received request cmpl-bcf822407796411cb641a7d1bb2d0794-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:09 [async_llm.py:261] Added request cmpl-bcf822407796411cb641a7d1bb2d0794-0.
INFO 03-01 23:40:10 [logger.py:42] Received request cmpl-ba5b606a5c054edebc48d6f7b32a7f69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:10 [async_llm.py:261] Added request cmpl-ba5b606a5c054edebc48d6f7b32a7f69-0.
INFO 03-01 23:40:11 [logger.py:42] Received request cmpl-e11093e847a74c9cafa2777612899dd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:11 [async_llm.py:261] Added request cmpl-e11093e847a74c9cafa2777612899dd8-0.
INFO 03-01 23:40:13 [logger.py:42] Received request cmpl-dd3cac23a511431c8f1aafd6a4836318-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:13 [async_llm.py:261] Added request cmpl-dd3cac23a511431c8f1aafd6a4836318-0.
INFO 03-01 23:40:14 [logger.py:42] Received request cmpl-6d1c03b26e9e4a43990507f786381ccc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:14 [async_llm.py:261] Added request cmpl-6d1c03b26e9e4a43990507f786381ccc-0.
INFO 03-01 23:40:15 [logger.py:42] Received request cmpl-03fed21afdc9437bb49e5eab8a1b6f52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:15 [async_llm.py:261] Added request cmpl-03fed21afdc9437bb49e5eab8a1b6f52-0.
INFO 03-01 23:40:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:16 [logger.py:42] Received request cmpl-532be0eb98984c74b89ca3f703b0efb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:16 [async_llm.py:261] Added request cmpl-532be0eb98984c74b89ca3f703b0efb7-0.
INFO 03-01 23:40:17 [logger.py:42] Received request cmpl-f268d50617c6485d872ed4a21a93bf2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:17 [async_llm.py:261] Added request cmpl-f268d50617c6485d872ed4a21a93bf2a-0.
INFO 03-01 23:40:18 [logger.py:42] Received request cmpl-4b6db2bbed5f47569b66d845c637f305-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:18 [async_llm.py:261] Added request cmpl-4b6db2bbed5f47569b66d845c637f305-0.
INFO 03-01 23:40:19 [logger.py:42] Received request cmpl-bf68be6f85bc41ceb635fcda0faaeb8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:19 [async_llm.py:261] Added request cmpl-bf68be6f85bc41ceb635fcda0faaeb8b-0.
INFO 03-01 23:40:20 [logger.py:42] Received request cmpl-d8de015419ee482999142c4d3eaff2aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:20 [async_llm.py:261] Added request cmpl-d8de015419ee482999142c4d3eaff2aa-0.
INFO 03-01 23:40:21 [logger.py:42] Received request cmpl-bed70f8329f74459a3d1fd9a3bcd3d65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:21 [async_llm.py:261] Added request cmpl-bed70f8329f74459a3d1fd9a3bcd3d65-0.
INFO 03-01 23:40:22 [logger.py:42] Received request cmpl-2224807521414993980c89ea83cf9d48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:22 [async_llm.py:261] Added request cmpl-2224807521414993980c89ea83cf9d48-0.
INFO 03-01 23:40:24 [logger.py:42] Received request cmpl-b20f83c7ae4443ee8f7456184e9d700b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:24 [async_llm.py:261] Added request cmpl-b20f83c7ae4443ee8f7456184e9d700b-0.
INFO 03-01 23:40:25 [logger.py:42] Received request cmpl-36cba7faad7a4fb295f42a22b458ea3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:25 [async_llm.py:261] Added request cmpl-36cba7faad7a4fb295f42a22b458ea3e-0.
INFO 03-01 23:40:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:26 [logger.py:42] Received request cmpl-16bdd231641c49c288254cbf1eeb30ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:26 [async_llm.py:261] Added request cmpl-16bdd231641c49c288254cbf1eeb30ef-0.
INFO 03-01 23:40:27 [logger.py:42] Received request cmpl-ee62d43ea1f34b829521ec997a06b509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:27 [async_llm.py:261] Added request cmpl-ee62d43ea1f34b829521ec997a06b509-0.
INFO 03-01 23:40:28 [logger.py:42] Received request cmpl-bd14e9bafdb54b93a2b17c65e8b3adb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:28 [async_llm.py:261] Added request cmpl-bd14e9bafdb54b93a2b17c65e8b3adb7-0.
INFO 03-01 23:40:29 [logger.py:42] Received request cmpl-19eb408b6c8e4f139ec2eac8e428fc8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:29 [async_llm.py:261] Added request cmpl-19eb408b6c8e4f139ec2eac8e428fc8d-0.
INFO 03-01 23:40:30 [logger.py:42] Received request cmpl-be560284e0414a8fa3af5c9fe19c61d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:30 [async_llm.py:261] Added request cmpl-be560284e0414a8fa3af5c9fe19c61d1-0.
INFO 03-01 23:40:31 [logger.py:42] Received request cmpl-c0bb5d61fe09431db13f1dfcbeabd4c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:31 [async_llm.py:261] Added request cmpl-c0bb5d61fe09431db13f1dfcbeabd4c5-0.
INFO 03-01 23:40:32 [logger.py:42] Received request cmpl-3e96d89c96934832b130089f247e7c87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:32 [async_llm.py:261] Added request cmpl-3e96d89c96934832b130089f247e7c87-0.
INFO 03-01 23:40:33 [logger.py:42] Received request cmpl-f7925f0546fe46be84222dffeb3f37b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:33 [async_llm.py:261] Added request cmpl-f7925f0546fe46be84222dffeb3f37b3-0.
INFO 03-01 23:40:34 [logger.py:42] Received request cmpl-a29728ba3ac546ffb7973df36ef32fb4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:34 [async_llm.py:261] Added request cmpl-a29728ba3ac546ffb7973df36ef32fb4-0.
INFO 03-01 23:40:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:36 [logger.py:42] Received request cmpl-e5a784ca3d234fa895676e36ec0e4ecf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:36 [async_llm.py:261] Added request cmpl-e5a784ca3d234fa895676e36ec0e4ecf-0.
INFO 03-01 23:40:37 [logger.py:42] Received request cmpl-332e64c2904a4d228f5ae212273c0561-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:37 [async_llm.py:261] Added request cmpl-332e64c2904a4d228f5ae212273c0561-0.
INFO 03-01 23:40:38 [logger.py:42] Received request cmpl-cfd0eab0f3e44e3c9777e6387ec9b949-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:38 [async_llm.py:261] Added request cmpl-cfd0eab0f3e44e3c9777e6387ec9b949-0.
INFO 03-01 23:40:39 [logger.py:42] Received request cmpl-f7622b883ba14dc1be9670658da6e6f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:39 [async_llm.py:261] Added request cmpl-f7622b883ba14dc1be9670658da6e6f7-0.
INFO 03-01 23:40:40 [logger.py:42] Received request cmpl-bc8e8189f502426fb1f3dd2144522c58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:40 [async_llm.py:261] Added request cmpl-bc8e8189f502426fb1f3dd2144522c58-0.
INFO 03-01 23:40:41 [logger.py:42] Received request cmpl-e90ea0a98ff14d67af55e72bd23fe0f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:41 [async_llm.py:261] Added request cmpl-e90ea0a98ff14d67af55e72bd23fe0f1-0.
INFO 03-01 23:40:42 [logger.py:42] Received request cmpl-394638270fa94848a53f8a938c2af759-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:42 [async_llm.py:261] Added request cmpl-394638270fa94848a53f8a938c2af759-0.
INFO 03-01 23:40:43 [logger.py:42] Received request cmpl-5b0b1949968046f5994c1b706c00274e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:43 [async_llm.py:261] Added request cmpl-5b0b1949968046f5994c1b706c00274e-0.
INFO 03-01 23:40:44 [logger.py:42] Received request cmpl-c2cff6ae6264414e91e12abf9febf8b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:44 [async_llm.py:261] Added request cmpl-c2cff6ae6264414e91e12abf9febf8b1-0.
INFO 03-01 23:40:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:45 [logger.py:42] Received request cmpl-07c0592f965f4ee6a60839be3855685d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:45 [async_llm.py:261] Added request cmpl-07c0592f965f4ee6a60839be3855685d-0.
INFO 03-01 23:40:46 [logger.py:42] Received request cmpl-bb4d65de462345ae99d1419d79895f76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:46 [async_llm.py:261] Added request cmpl-bb4d65de462345ae99d1419d79895f76-0.
INFO 03-01 23:40:48 [logger.py:42] Received request cmpl-32765657c96548139b51ce43c7ff07de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:48 [async_llm.py:261] Added request cmpl-32765657c96548139b51ce43c7ff07de-0.
INFO 03-01 23:40:49 [logger.py:42] Received request cmpl-cd93aa9381c545f890f0c880ce0498c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:49 [async_llm.py:261] Added request cmpl-cd93aa9381c545f890f0c880ce0498c0-0.
INFO 03-01 23:40:50 [logger.py:42] Received request cmpl-c0d1ae4a0ab84842b1e07fd91cdddc41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:50 [async_llm.py:261] Added request cmpl-c0d1ae4a0ab84842b1e07fd91cdddc41-0.
INFO 03-01 23:40:51 [logger.py:42] Received request cmpl-d7861050bf894f32888a8c096a01bd1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:51 [async_llm.py:261] Added request cmpl-d7861050bf894f32888a8c096a01bd1c-0.
INFO 03-01 23:40:52 [logger.py:42] Received request cmpl-0b833cd2d24e400b9d4230d75ec53e8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:52 [async_llm.py:261] Added request cmpl-0b833cd2d24e400b9d4230d75ec53e8b-0.
INFO 03-01 23:40:53 [logger.py:42] Received request cmpl-da5668dad8784d499ed5558d278b485b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:53 [async_llm.py:261] Added request cmpl-da5668dad8784d499ed5558d278b485b-0.
INFO 03-01 23:40:54 [logger.py:42] Received request cmpl-0141b7466de343079152860193084214-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:54 [async_llm.py:261] Added request cmpl-0141b7466de343079152860193084214-0.
INFO 03-01 23:40:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:40:55 [logger.py:42] Received request cmpl-a3bca7d42578479dbb11490e7f3d0638-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:55 [async_llm.py:261] Added request cmpl-a3bca7d42578479dbb11490e7f3d0638-0.
INFO 03-01 23:40:56 [logger.py:42] Received request cmpl-c3421427ccf64fc6a55daf90970e1d94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:56 [async_llm.py:261] Added request cmpl-c3421427ccf64fc6a55daf90970e1d94-0.
INFO 03-01 23:40:57 [logger.py:42] Received request cmpl-08c7d7da039f4f308c30fc6c1aaa9fba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:57 [async_llm.py:261] Added request cmpl-08c7d7da039f4f308c30fc6c1aaa9fba-0.
INFO 03-01 23:40:58 [logger.py:42] Received request cmpl-a464424b5eef49668b9a48b37692b166-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:40:58 [async_llm.py:261] Added request cmpl-a464424b5eef49668b9a48b37692b166-0.
INFO 03-01 23:41:00 [logger.py:42] Received request cmpl-50d21649acd44ea595016cada4049919-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:00 [async_llm.py:261] Added request cmpl-50d21649acd44ea595016cada4049919-0.
INFO 03-01 23:41:01 [logger.py:42] Received request cmpl-17058042921746af85eaa7b4bbb99603-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:01 [async_llm.py:261] Added request cmpl-17058042921746af85eaa7b4bbb99603-0.
INFO 03-01 23:41:02 [logger.py:42] Received request cmpl-470a318c29024e25a18191f0f239da15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:02 [async_llm.py:261] Added request cmpl-470a318c29024e25a18191f0f239da15-0.
INFO 03-01 23:41:03 [logger.py:42] Received request cmpl-2deadcacb4384f5e827a2a1a5f357e26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:03 [async_llm.py:261] Added request cmpl-2deadcacb4384f5e827a2a1a5f357e26-0.
INFO 03-01 23:41:04 [logger.py:42] Received request cmpl-41c203cf04ef458fa2ee6b0b4207050c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:04 [async_llm.py:261] Added request cmpl-41c203cf04ef458fa2ee6b0b4207050c-0.
INFO 03-01 23:41:05 [logger.py:42] Received request cmpl-755282a2fbed4a6597fe92fd1fa3a57f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:05 [async_llm.py:261] Added request cmpl-755282a2fbed4a6597fe92fd1fa3a57f-0.
INFO 03-01 23:41:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:06 [logger.py:42] Received request cmpl-6d5af596d61c46de8a5be0b606e86e4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:06 [async_llm.py:261] Added request cmpl-6d5af596d61c46de8a5be0b606e86e4b-0.
INFO 03-01 23:41:07 [logger.py:42] Received request cmpl-ab533c15deb1488b80e35c457773256d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:07 [async_llm.py:261] Added request cmpl-ab533c15deb1488b80e35c457773256d-0.
INFO 03-01 23:41:08 [logger.py:42] Received request cmpl-54513a1672624683ac41418e101d563e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:08 [async_llm.py:261] Added request cmpl-54513a1672624683ac41418e101d563e-0.
INFO 03-01 23:41:09 [logger.py:42] Received request cmpl-d740cb37511d4a78a515eab408804202-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:09 [async_llm.py:261] Added request cmpl-d740cb37511d4a78a515eab408804202-0.
INFO 03-01 23:41:11 [logger.py:42] Received request cmpl-6c3a73ba659c4e1ab63bb3d5d6a5d3d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:11 [async_llm.py:261] Added request cmpl-6c3a73ba659c4e1ab63bb3d5d6a5d3d5-0.
INFO 03-01 23:41:12 [logger.py:42] Received request cmpl-3a12511f2a9a441391e617be7ef4869d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:12 [async_llm.py:261] Added request cmpl-3a12511f2a9a441391e617be7ef4869d-0.
INFO 03-01 23:41:13 [logger.py:42] Received request cmpl-c721b30710b143d1adcee0f58f122fae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:13 [async_llm.py:261] Added request cmpl-c721b30710b143d1adcee0f58f122fae-0.
INFO 03-01 23:41:14 [logger.py:42] Received request cmpl-5a16601fa3bd4b71ba9a81b3bd066928-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:14 [async_llm.py:261] Added request cmpl-5a16601fa3bd4b71ba9a81b3bd066928-0.
INFO 03-01 23:41:15 [logger.py:42] Received request cmpl-17554e6907814ea790343da8ce6a123f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:15 [async_llm.py:261] Added request cmpl-17554e6907814ea790343da8ce6a123f-0.
INFO 03-01 23:41:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:16 [logger.py:42] Received request cmpl-ddffa28830314101a0c939a9c176e1fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:16 [async_llm.py:261] Added request cmpl-ddffa28830314101a0c939a9c176e1fb-0.
INFO 03-01 23:41:17 [logger.py:42] Received request cmpl-81bea345c81b4b7aa1b56f6ae031e3f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:17 [async_llm.py:261] Added request cmpl-81bea345c81b4b7aa1b56f6ae031e3f0-0.
INFO 03-01 23:41:18 [logger.py:42] Received request cmpl-7c30345fcd1a4ab58cb16f34b3fd57a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:18 [async_llm.py:261] Added request cmpl-7c30345fcd1a4ab58cb16f34b3fd57a4-0.
INFO 03-01 23:41:19 [logger.py:42] Received request cmpl-995a30a3a9574a26a51813b4f441e2fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:19 [async_llm.py:261] Added request cmpl-995a30a3a9574a26a51813b4f441e2fb-0.
INFO 03-01 23:41:20 [logger.py:42] Received request cmpl-410863cc6990492ab50a5a3614637139-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:20 [async_llm.py:261] Added request cmpl-410863cc6990492ab50a5a3614637139-0.
INFO 03-01 23:41:21 [logger.py:42] Received request cmpl-a7cd3a34e6114a9fb1e197439eb94007-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:21 [async_llm.py:261] Added request cmpl-a7cd3a34e6114a9fb1e197439eb94007-0.
INFO 03-01 23:41:23 [logger.py:42] Received request cmpl-ddb0ee3ca9a34457a50cf26da249bfd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:23 [async_llm.py:261] Added request cmpl-ddb0ee3ca9a34457a50cf26da249bfd5-0.
INFO 03-01 23:41:24 [logger.py:42] Received request cmpl-ac07dff1192f4ea894889bbec8f3c0a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:24 [async_llm.py:261] Added request cmpl-ac07dff1192f4ea894889bbec8f3c0a4-0.
INFO 03-01 23:41:25 [logger.py:42] Received request cmpl-e23a1b65974f4b5fbe57f7919a428228-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:25 [async_llm.py:261] Added request cmpl-e23a1b65974f4b5fbe57f7919a428228-0.
INFO 03-01 23:41:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:26 [logger.py:42] Received request cmpl-9cca7d9cdfbc485faa0dd6b787c6fa5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:26 [async_llm.py:261] Added request cmpl-9cca7d9cdfbc485faa0dd6b787c6fa5b-0.
INFO 03-01 23:41:27 [logger.py:42] Received request cmpl-4f9e2f1633114c68879baeaba63f3e3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:27 [async_llm.py:261] Added request cmpl-4f9e2f1633114c68879baeaba63f3e3c-0.
INFO 03-01 23:41:28 [logger.py:42] Received request cmpl-a2cb73656d164e3a87096d58807b0132-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:28 [async_llm.py:261] Added request cmpl-a2cb73656d164e3a87096d58807b0132-0.
INFO 03-01 23:41:29 [logger.py:42] Received request cmpl-3843684ec70e40c09cb1b965760a4f28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:29 [async_llm.py:261] Added request cmpl-3843684ec70e40c09cb1b965760a4f28-0.
INFO 03-01 23:41:30 [logger.py:42] Received request cmpl-3ac125910fc74bce8d2c3427e129924a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:30 [async_llm.py:261] Added request cmpl-3ac125910fc74bce8d2c3427e129924a-0.
INFO 03-01 23:41:31 [logger.py:42] Received request cmpl-7e6ce8cdb3b042d39c0bf651004dabfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:31 [async_llm.py:261] Added request cmpl-7e6ce8cdb3b042d39c0bf651004dabfe-0.
INFO 03-01 23:41:32 [logger.py:42] Received request cmpl-a8a89deb73254f76a7b791b3a76b1d93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:32 [async_llm.py:261] Added request cmpl-a8a89deb73254f76a7b791b3a76b1d93-0.
INFO 03-01 23:41:33 [logger.py:42] Received request cmpl-4b3e5507835b45d58826c358aa7507b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:33 [async_llm.py:261] Added request cmpl-4b3e5507835b45d58826c358aa7507b5-0.
INFO 03-01 23:41:35 [logger.py:42] Received request cmpl-97bc314fe8c54217995f35624349d26f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:35 [async_llm.py:261] Added request cmpl-97bc314fe8c54217995f35624349d26f-0.
INFO 03-01 23:41:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:36 [logger.py:42] Received request cmpl-8f70a729407a492486352428b96cb69d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:36 [async_llm.py:261] Added request cmpl-8f70a729407a492486352428b96cb69d-0.
INFO 03-01 23:41:37 [logger.py:42] Received request cmpl-d6ff26bd77ed495aa3f525de663dce9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:37 [async_llm.py:261] Added request cmpl-d6ff26bd77ed495aa3f525de663dce9e-0.
INFO 03-01 23:41:38 [logger.py:42] Received request cmpl-a95b73e730714b819901c19af60dc384-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:38 [async_llm.py:261] Added request cmpl-a95b73e730714b819901c19af60dc384-0.
INFO 03-01 23:41:39 [logger.py:42] Received request cmpl-5039e48a91304d81b590d362af56a086-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:39 [async_llm.py:261] Added request cmpl-5039e48a91304d81b590d362af56a086-0.
INFO 03-01 23:41:40 [logger.py:42] Received request cmpl-b326afcc89d74188927194717f9d954f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:40 [async_llm.py:261] Added request cmpl-b326afcc89d74188927194717f9d954f-0.
INFO 03-01 23:41:41 [logger.py:42] Received request cmpl-ba1b8dbaf2484278a86ac278e869730e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:41 [async_llm.py:261] Added request cmpl-ba1b8dbaf2484278a86ac278e869730e-0.
INFO 03-01 23:41:42 [logger.py:42] Received request cmpl-440b1f4de8874bfcbb107ef2474a85da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:42 [async_llm.py:261] Added request cmpl-440b1f4de8874bfcbb107ef2474a85da-0.
INFO 03-01 23:41:43 [logger.py:42] Received request cmpl-8b1bcca2cd9049c0b6b3f3c0a9ecf787-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:43 [async_llm.py:261] Added request cmpl-8b1bcca2cd9049c0b6b3f3c0a9ecf787-0.
INFO 03-01 23:41:44 [logger.py:42] Received request cmpl-c988aba542ec4437b3fd47bdf9179602-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:44 [async_llm.py:261] Added request cmpl-c988aba542ec4437b3fd47bdf9179602-0.
INFO 03-01 23:41:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:41:45 [logger.py:42] Received request cmpl-4ae9d0b9012542af9d3b26af612b479d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:45 [async_llm.py:261] Added request cmpl-4ae9d0b9012542af9d3b26af612b479d-0.
INFO 03-01 23:41:47 [logger.py:42] Received request cmpl-1113622205c74585ab9dd564322cc011-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:47 [async_llm.py:261] Added request cmpl-1113622205c74585ab9dd564322cc011-0.
INFO 03-01 23:41:48 [logger.py:42] Received request cmpl-45b71593792640d59ff708b5ae558fef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:48 [async_llm.py:261] Added request cmpl-45b71593792640d59ff708b5ae558fef-0.
INFO 03-01 23:41:49 [logger.py:42] Received request cmpl-a82f9d0357db403d89f00776834b3800-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:49 [async_llm.py:261] Added request cmpl-a82f9d0357db403d89f00776834b3800-0.
INFO 03-01 23:41:50 [logger.py:42] Received request cmpl-f69d26b2030a4bf1b19af9af7dadd207-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:50 [async_llm.py:261] Added request cmpl-f69d26b2030a4bf1b19af9af7dadd207-0.
INFO 03-01 23:41:51 [logger.py:42] Received request cmpl-63de0d3d05cf4853aedbf8b9c3d7fb5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:51 [async_llm.py:261] Added request cmpl-63de0d3d05cf4853aedbf8b9c3d7fb5a-0.
INFO 03-01 23:41:52 [logger.py:42] Received request cmpl-aedae8218e3f41f9b5516de6b2639512-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:52 [async_llm.py:261] Added request cmpl-aedae8218e3f41f9b5516de6b2639512-0.
INFO 03-01 23:41:53 [logger.py:42] Received request cmpl-c66d2b1f3add46e896c6ebec826b0634-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:53 [async_llm.py:261] Added request cmpl-c66d2b1f3add46e896c6ebec826b0634-0.
INFO 03-01 23:41:54 [logger.py:42] Received request cmpl-be5ab01f5c38427295203605e1ea039b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:54 [async_llm.py:261] Added request cmpl-be5ab01f5c38427295203605e1ea039b-0.
INFO 03-01 23:41:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
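The periodic engine summary above is consistent with the request pattern in the log: every prompt is 7 tokens (see `prompt_token_ids`), every completion runs to `max_tokens=5` (greedy decoding, no stop strings), and the summary covers a 10 s metrics window. Assuming nine requests landed in that window — which the roughly one-per-second timestamps suggest, though the exact count is an inference — the reported 6.3 prompt tokens/s and 4.5 generation tokens/s fall out directly:

```python
# Sanity check of the Engine 000 throughput summary.
# Assumptions (inferred from the log, not stated by it):
#   - 9 requests in the 10 s metrics window
#   - every request has a 7-token prompt and generates exactly 5 tokens
prompt_tokens_per_req = 7   # len(prompt_token_ids) in each entry
gen_tokens_per_req = 5      # max_tokens=5, reached every time
reqs_in_window = 9          # assumption from the ~1 req/s cadence
window_s = 10.0             # summary interval of the metrics logger

prompt_tps = prompt_tokens_per_req * reqs_in_window / window_s
gen_tps = gen_tokens_per_req * reqs_in_window / window_s
print(round(prompt_tps, 1), round(gen_tps, 1))  # 6.3 4.5
```

The zero KV-cache usage and zero prefix-cache hit rate in the same line fit this picture: each 5-token completion finishes before the next request arrives, so the cache is empty at every sampling point.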
INFO 03-01 23:41:55 [logger.py:42] Received request cmpl-0fd2e5b2503d46f29d1ecbcb3218f77e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:55 [async_llm.py:261] Added request cmpl-0fd2e5b2503d46f29d1ecbcb3218f77e-0.
INFO 03-01 23:41:56 [logger.py:42] Received request cmpl-e6c9a1d6a8a043cabee85c97b219f89c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:56 [async_llm.py:261] Added request cmpl-e6c9a1d6a8a043cabee85c97b219f89c-0.
INFO 03-01 23:41:57 [logger.py:42] Received request cmpl-5db74811912b4a8da5e208c5b576d9f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:57 [async_llm.py:261] Added request cmpl-5db74811912b4a8da5e208c5b576d9f3-0.
INFO 03-01 23:41:59 [logger.py:42] Received request cmpl-4ba364655540431fb4173043cf355d0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:41:59 [async_llm.py:261] Added request cmpl-4ba364655540431fb4173043cf355d0f-0.
INFO 03-01 23:42:00 [logger.py:42] Received request cmpl-11cc0ed628ea4355a75918f5974ef847-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:00 [async_llm.py:261] Added request cmpl-11cc0ed628ea4355a75918f5974ef847-0.
INFO 03-01 23:42:01 [logger.py:42] Received request cmpl-8e641da172204031a667bc957601233d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:01 [async_llm.py:261] Added request cmpl-8e641da172204031a667bc957601233d-0.
INFO 03-01 23:42:02 [logger.py:42] Received request cmpl-476a9aa826c942e1912998eb17922804-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:02 [async_llm.py:261] Added request cmpl-476a9aa826c942e1912998eb17922804-0.
INFO 03-01 23:42:03 [logger.py:42] Received request cmpl-e5004d35bd9b466caa6b592298380825-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:03 [async_llm.py:261] Added request cmpl-e5004d35bd9b466caa6b592298380825-0.
INFO 03-01 23:42:04 [logger.py:42] Received request cmpl-bc1508e7a4444410ad093a284d5e7245-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:04 [async_llm.py:261] Added request cmpl-bc1508e7a4444410ad093a284d5e7245-0.
INFO 03-01 23:42:05 [logger.py:42] Received request cmpl-a233a0b4b4474741847342c91136e064-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:05 [async_llm.py:261] Added request cmpl-a233a0b4b4474741847342c91136e064-0.
INFO 03-01 23:42:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:06 [logger.py:42] Received request cmpl-2b6425ec8e9c44899be829a2f96b2ec6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:06 [async_llm.py:261] Added request cmpl-2b6425ec8e9c44899be829a2f96b2ec6-0.
INFO 03-01 23:42:07 [logger.py:42] Received request cmpl-871a5aeb255f4f13ae08a2caa20d345d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:07 [async_llm.py:261] Added request cmpl-871a5aeb255f4f13ae08a2caa20d345d-0.
INFO 03-01 23:42:08 [logger.py:42] Received request cmpl-8607dd59633a426286c38c676c28d70d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:08 [async_llm.py:261] Added request cmpl-8607dd59633a426286c38c676c28d70d-0.
INFO 03-01 23:42:09 [logger.py:42] Received request cmpl-23f7c3cbc3bd4da8b2efd260e683918f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:09 [async_llm.py:261] Added request cmpl-23f7c3cbc3bd4da8b2efd260e683918f-0.
INFO 03-01 23:42:11 [logger.py:42] Received request cmpl-76b7a44b437648d1a73cca51da1ce1a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:11 [async_llm.py:261] Added request cmpl-76b7a44b437648d1a73cca51da1ce1a0-0.
INFO 03-01 23:42:12 [logger.py:42] Received request cmpl-a6c4ac4a68b647dd99fbd70d16739e32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:12 [async_llm.py:261] Added request cmpl-a6c4ac4a68b647dd99fbd70d16739e32-0.
INFO 03-01 23:42:13 [logger.py:42] Received request cmpl-681577fc493d403a9d24c76fd6ecbd5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:13 [async_llm.py:261] Added request cmpl-681577fc493d403a9d24c76fd6ecbd5c-0.
INFO 03-01 23:42:14 [logger.py:42] Received request cmpl-022a2f9edfb84d23aff238316ac9c3fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:14 [async_llm.py:261] Added request cmpl-022a2f9edfb84d23aff238316ac9c3fc-0.
INFO 03-01 23:42:15 [logger.py:42] Received request cmpl-41d8f9a8e3ae4a8a8a704f8f4b068b52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:15 [async_llm.py:261] Added request cmpl-41d8f9a8e3ae4a8a8a704f8f4b068b52-0.
INFO 03-01 23:42:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:16 [logger.py:42] Received request cmpl-dea165c26e3e4dd6b5e2d81ac943f6b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:16 [async_llm.py:261] Added request cmpl-dea165c26e3e4dd6b5e2d81ac943f6b4-0.
INFO 03-01 23:42:17 [logger.py:42] Received request cmpl-2583ff25f5c643f187d9dce759041fc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:17 [async_llm.py:261] Added request cmpl-2583ff25f5c643f187d9dce759041fc2-0.
INFO 03-01 23:42:18 [logger.py:42] Received request cmpl-350ae42839244d53b6cdbeb8427e22fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:18 [async_llm.py:261] Added request cmpl-350ae42839244d53b6cdbeb8427e22fa-0.
INFO 03-01 23:42:19 [logger.py:42] Received request cmpl-b9f92991f4ef4f119a06fd47f3188ef8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:19 [async_llm.py:261] Added request cmpl-b9f92991f4ef4f119a06fd47f3188ef8-0.
INFO 03-01 23:42:20 [logger.py:42] Received request cmpl-afc54d0fdfd44273b5a0a7944a3dbbc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:20 [async_llm.py:261] Added request cmpl-afc54d0fdfd44273b5a0a7944a3dbbc6-0.
INFO 03-01 23:42:22 [logger.py:42] Received request cmpl-bb8e82173bcb4824ba9eb5e2c6697736-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:22 [async_llm.py:261] Added request cmpl-bb8e82173bcb4824ba9eb5e2c6697736-0.
INFO 03-01 23:42:23 [logger.py:42] Received request cmpl-54d2133fa42f4925baae5cd1c0dbec4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:23 [async_llm.py:261] Added request cmpl-54d2133fa42f4925baae5cd1c0dbec4c-0.
INFO 03-01 23:42:24 [logger.py:42] Received request cmpl-14e55de40d33467abf61c5102d9d5834-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:24 [async_llm.py:261] Added request cmpl-14e55de40d33467abf61c5102d9d5834-0.
INFO 03-01 23:42:25 [logger.py:42] Received request cmpl-198a5144b63b4bd592ee9be31571e025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:25 [async_llm.py:261] Added request cmpl-198a5144b63b4bd592ee9be31571e025-0.
INFO 03-01 23:42:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:26 [logger.py:42] Received request cmpl-a79aeec275d4444db0bfa471f57cba21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:26 [async_llm.py:261] Added request cmpl-a79aeec275d4444db0bfa471f57cba21-0.
INFO 03-01 23:42:27 [logger.py:42] Received request cmpl-8fb366ddded4463998d0cc0edb49f902-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:27 [async_llm.py:261] Added request cmpl-8fb366ddded4463998d0cc0edb49f902-0.
INFO 03-01 23:42:28 [logger.py:42] Received request cmpl-d8f6125e96894ff1bdb9eb6b8c6c91a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:28 [async_llm.py:261] Added request cmpl-d8f6125e96894ff1bdb9eb6b8c6c91a9-0.
INFO 03-01 23:42:29 [logger.py:42] Received request cmpl-25cb2189ba034845991de6b53ea934a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:29 [async_llm.py:261] Added request cmpl-25cb2189ba034845991de6b53ea934a5-0.
INFO 03-01 23:42:30 [logger.py:42] Received request cmpl-130e83b3651e49f8acc6dd343487fb72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:30 [async_llm.py:261] Added request cmpl-130e83b3651e49f8acc6dd343487fb72-0.
INFO 03-01 23:42:31 [logger.py:42] Received request cmpl-7c85870608654a2d84ad63666e72ea6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:31 [async_llm.py:261] Added request cmpl-7c85870608654a2d84ad63666e72ea6b-0.
INFO 03-01 23:42:32 [logger.py:42] Received request cmpl-cd0f28eba91d4b31983ea27645765b5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:32 [async_llm.py:261] Added request cmpl-cd0f28eba91d4b31983ea27645765b5c-0.
INFO 03-01 23:42:34 [logger.py:42] Received request cmpl-df223b75a6e14e77b1fad07a95b0a568-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:34 [async_llm.py:261] Added request cmpl-df223b75a6e14e77b1fad07a95b0a568-0.
INFO 03-01 23:42:35 [logger.py:42] Received request cmpl-e71260bee97a4046aba4c1d1b7cd09c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:35 [async_llm.py:261] Added request cmpl-e71260bee97a4046aba4c1d1b7cd09c3-0.
INFO 03-01 23:42:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:36 [logger.py:42] Received request cmpl-07475266be494549a7a141c58e8d07ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:36 [async_llm.py:261] Added request cmpl-07475266be494549a7a141c58e8d07ae-0.
INFO 03-01 23:42:37 [logger.py:42] Received request cmpl-310c01932b574012bf72fe20dbd0e665-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:37 [async_llm.py:261] Added request cmpl-310c01932b574012bf72fe20dbd0e665-0.
INFO 03-01 23:42:38 [logger.py:42] Received request cmpl-9d37452905d140aebef4aa6a47e10c6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:38 [async_llm.py:261] Added request cmpl-9d37452905d140aebef4aa6a47e10c6e-0.
INFO 03-01 23:42:39 [logger.py:42] Received request cmpl-50d149787fcc401e8a42141701d42f6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:39 [async_llm.py:261] Added request cmpl-50d149787fcc401e8a42141701d42f6a-0.
INFO 03-01 23:42:40 [logger.py:42] Received request cmpl-d070bfda107148119dc6208815e813c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:40 [async_llm.py:261] Added request cmpl-d070bfda107148119dc6208815e813c9-0.
INFO 03-01 23:42:41 [logger.py:42] Received request cmpl-54a75b53ed604487b68734a376ccc2a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:41 [async_llm.py:261] Added request cmpl-54a75b53ed604487b68734a376ccc2a0-0.
INFO 03-01 23:42:42 [logger.py:42] Received request cmpl-58c837eaa43c4eb29c89b0ae44a4c1fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:42 [async_llm.py:261] Added request cmpl-58c837eaa43c4eb29c89b0ae44a4c1fb-0.
INFO 03-01 23:42:43 [logger.py:42] Received request cmpl-3bffa411ddcd4c7586ece6ff9daa105f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:43 [async_llm.py:261] Added request cmpl-3bffa411ddcd4c7586ece6ff9daa105f-0.
INFO 03-01 23:42:44 [logger.py:42] Received request cmpl-88139121a307413cac7c8d7c472f9cee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:44 [async_llm.py:261] Added request cmpl-88139121a307413cac7c8d7c472f9cee-0.
INFO 03-01 23:42:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:46 [logger.py:42] Received request cmpl-62ca57459eb74bb2ac35c4422296c798-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:46 [async_llm.py:261] Added request cmpl-62ca57459eb74bb2ac35c4422296c798-0.
INFO 03-01 23:42:47 [logger.py:42] Received request cmpl-8ef0757662f24715870fe254c1ea783a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:47 [async_llm.py:261] Added request cmpl-8ef0757662f24715870fe254c1ea783a-0.
INFO 03-01 23:42:48 [logger.py:42] Received request cmpl-6aedeb4396624f45a556017657a959cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:48 [async_llm.py:261] Added request cmpl-6aedeb4396624f45a556017657a959cc-0.
INFO 03-01 23:42:49 [logger.py:42] Received request cmpl-4b29f66c336843e89d0b2d27698a5c18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:49 [async_llm.py:261] Added request cmpl-4b29f66c336843e89d0b2d27698a5c18-0.
INFO 03-01 23:42:50 [logger.py:42] Received request cmpl-3e2bc9106a004865ae689ca81b4c9a21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:50 [async_llm.py:261] Added request cmpl-3e2bc9106a004865ae689ca81b4c9a21-0.
INFO 03-01 23:42:51 [logger.py:42] Received request cmpl-45d21380681440b39a831f4e54e5be02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:51 [async_llm.py:261] Added request cmpl-45d21380681440b39a831f4e54e5be02-0.
INFO 03-01 23:42:52 [logger.py:42] Received request cmpl-76fabb71cecc4adabb5a7c5adfe171af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:52 [async_llm.py:261] Added request cmpl-76fabb71cecc4adabb5a7c5adfe171af-0.
INFO 03-01 23:42:53 [logger.py:42] Received request cmpl-8a00d5664d454fcabce4d34498b17c93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:53 [async_llm.py:261] Added request cmpl-8a00d5664d454fcabce4d34498b17c93-0.
INFO 03-01 23:42:54 [logger.py:42] Received request cmpl-9fac04b5a0ab4fd08adaa86b51839cac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:54 [async_llm.py:261] Added request cmpl-9fac04b5a0ab4fd08adaa86b51839cac-0.
INFO 03-01 23:42:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:42:55 [logger.py:42] Received request cmpl-7533aaa21a3548e7b96decb03fe06e96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:55 [async_llm.py:261] Added request cmpl-7533aaa21a3548e7b96decb03fe06e96-0.
INFO 03-01 23:42:57 [logger.py:42] Received request cmpl-a45e3df5f93946d5b917e7d745ce059d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:57 [async_llm.py:261] Added request cmpl-a45e3df5f93946d5b917e7d745ce059d-0.
INFO 03-01 23:42:58 [logger.py:42] Received request cmpl-0524de1e999843958605dba04883ddd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:58 [async_llm.py:261] Added request cmpl-0524de1e999843958605dba04883ddd8-0.
INFO 03-01 23:42:59 [logger.py:42] Received request cmpl-afaebed2368942729262cacf24e27470-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:42:59 [async_llm.py:261] Added request cmpl-afaebed2368942729262cacf24e27470-0.
INFO 03-01 23:43:00 [logger.py:42] Received request cmpl-f7e6f949ce994413bf6aa0d79d0efb77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:00 [async_llm.py:261] Added request cmpl-f7e6f949ce994413bf6aa0d79d0efb77-0.
INFO 03-01 23:43:01 [logger.py:42] Received request cmpl-e7c2dd4e70d2493cba846fa33de7fcb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:01 [async_llm.py:261] Added request cmpl-e7c2dd4e70d2493cba846fa33de7fcb6-0.
INFO 03-01 23:43:02 [logger.py:42] Received request cmpl-57ced7b2b7d5412098de699857843c98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:02 [async_llm.py:261] Added request cmpl-57ced7b2b7d5412098de699857843c98-0.
INFO 03-01 23:43:03 [logger.py:42] Received request cmpl-b3918696ba5c42fcb8fbfd2f6afd19ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:03 [async_llm.py:261] Added request cmpl-b3918696ba5c42fcb8fbfd2f6afd19ac-0.
INFO 03-01 23:43:04 [logger.py:42] Received request cmpl-5add52a131de40a38b4f6fb585f2b9f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:04 [async_llm.py:261] Added request cmpl-5add52a131de40a38b4f6fb585f2b9f9-0.
INFO 03-01 23:43:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:05 [logger.py:42] Received request cmpl-8f7b7cb4641043b7b1401fe955fc8da3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:05 [async_llm.py:261] Added request cmpl-8f7b7cb4641043b7b1401fe955fc8da3-0.
INFO 03-01 23:43:06 [logger.py:42] Received request cmpl-c4294d1eacfe456d89587329942923c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:06 [async_llm.py:261] Added request cmpl-c4294d1eacfe456d89587329942923c9-0.
INFO 03-01 23:43:07 [logger.py:42] Received request cmpl-73e6477e63f84d71a163c874e880ee1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:07 [async_llm.py:261] Added request cmpl-73e6477e63f84d71a163c874e880ee1e-0.
INFO 03-01 23:43:09 [logger.py:42] Received request cmpl-f6a266e9d5524b6c8174340af19e0a3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:09 [async_llm.py:261] Added request cmpl-f6a266e9d5524b6c8174340af19e0a3b-0.
INFO 03-01 23:43:10 [logger.py:42] Received request cmpl-80d1d96552864937807306b79885ed00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:10 [async_llm.py:261] Added request cmpl-80d1d96552864937807306b79885ed00-0.
INFO 03-01 23:43:11 [logger.py:42] Received request cmpl-3ebff32fc8f5447cac54b1d57674f3c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:11 [async_llm.py:261] Added request cmpl-3ebff32fc8f5447cac54b1d57674f3c7-0.
INFO 03-01 23:43:12 [logger.py:42] Received request cmpl-1616e3c20fd74b7c9c5a272d19d37d03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:12 [async_llm.py:261] Added request cmpl-1616e3c20fd74b7c9c5a272d19d37d03-0.
INFO 03-01 23:43:13 [logger.py:42] Received request cmpl-112afe3767eb402db5e2b71cf2027bbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:13 [async_llm.py:261] Added request cmpl-112afe3767eb402db5e2b71cf2027bbd-0.
INFO 03-01 23:43:14 [logger.py:42] Received request cmpl-ab415618fd4b4f9483e23ea866637510-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:14 [async_llm.py:261] Added request cmpl-ab415618fd4b4f9483e23ea866637510-0.
INFO 03-01 23:43:15 [logger.py:42] Received request cmpl-6167e2d40520417ab1dddcd798f9c95e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:15 [async_llm.py:261] Added request cmpl-6167e2d40520417ab1dddcd798f9c95e-0.
INFO 03-01 23:43:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:16 [logger.py:42] Received request cmpl-6f312537cfce48ef92f6b30ec349e5b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:16 [async_llm.py:261] Added request cmpl-6f312537cfce48ef92f6b30ec349e5b1-0.
INFO 03-01 23:43:17 [logger.py:42] Received request cmpl-49d733b150d448b09c78023dcda6a3a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:17 [async_llm.py:261] Added request cmpl-49d733b150d448b09c78023dcda6a3a4-0.
INFO 03-01 23:43:18 [logger.py:42] Received request cmpl-d2f735d40a6f4b67ae0b205e2a535634-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:18 [async_llm.py:261] Added request cmpl-d2f735d40a6f4b67ae0b205e2a535634-0.
INFO 03-01 23:43:19 [logger.py:42] Received request cmpl-a68531f093c342f28f245e77b263bbc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:19 [async_llm.py:261] Added request cmpl-a68531f093c342f28f245e77b263bbc4-0.
INFO 03-01 23:43:21 [logger.py:42] Received request cmpl-cb0e39d769824e3081a3731529e66191-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:21 [async_llm.py:261] Added request cmpl-cb0e39d769824e3081a3731529e66191-0.
INFO 03-01 23:43:22 [logger.py:42] Received request cmpl-e7191ad841064586a555ffc9bd54b334-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:22 [async_llm.py:261] Added request cmpl-e7191ad841064586a555ffc9bd54b334-0.
INFO 03-01 23:43:23 [logger.py:42] Received request cmpl-1dbe8a9c1c364fb7b22f36bc70171273-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:23 [async_llm.py:261] Added request cmpl-1dbe8a9c1c364fb7b22f36bc70171273-0.
INFO 03-01 23:43:24 [logger.py:42] Received request cmpl-72aeec62965f45c780c8b3bafefd7ad6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:24 [async_llm.py:261] Added request cmpl-72aeec62965f45c780c8b3bafefd7ad6-0.
INFO 03-01 23:43:25 [logger.py:42] Received request cmpl-60d2047473fb4ef2bbb0e7c401cce270-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:25 [async_llm.py:261] Added request cmpl-60d2047473fb4ef2bbb0e7c401cce270-0.
INFO 03-01 23:43:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:26 [logger.py:42] Received request cmpl-837c77c230494c92b2a205f67de51708-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:26 [async_llm.py:261] Added request cmpl-837c77c230494c92b2a205f67de51708-0.
INFO 03-01 23:43:27 [logger.py:42] Received request cmpl-46656e7812ca4bb6a1d7ab0c7a2bd3ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:27 [async_llm.py:261] Added request cmpl-46656e7812ca4bb6a1d7ab0c7a2bd3ef-0.
INFO 03-01 23:43:28 [logger.py:42] Received request cmpl-d2e1752c9e4f4232aac0f34393104853-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:28 [async_llm.py:261] Added request cmpl-d2e1752c9e4f4232aac0f34393104853-0.
INFO 03-01 23:43:29 [logger.py:42] Received request cmpl-bf5924b8eaea444eaedf8f2d9014f80a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:29 [async_llm.py:261] Added request cmpl-bf5924b8eaea444eaedf8f2d9014f80a-0.
INFO 03-01 23:43:30 [logger.py:42] Received request cmpl-94ca4ff717514260934d56c7bbcca735-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:30 [async_llm.py:261] Added request cmpl-94ca4ff717514260934d56c7bbcca735-0.
INFO 03-01 23:43:32 [logger.py:42] Received request cmpl-d0273935bb974e7db04e4c72c71413f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:32 [async_llm.py:261] Added request cmpl-d0273935bb974e7db04e4c72c71413f4-0.
INFO 03-01 23:43:33 [logger.py:42] Received request cmpl-c040f42ca1744ee791303009316d972c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:33 [async_llm.py:261] Added request cmpl-c040f42ca1744ee791303009316d972c-0.
INFO 03-01 23:43:34 [logger.py:42] Received request cmpl-665cdd2f71a74904b18bf2cc20438903-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:34 [async_llm.py:261] Added request cmpl-665cdd2f71a74904b18bf2cc20438903-0.
INFO 03-01 23:43:35 [logger.py:42] Received request cmpl-3633698aaa7a4b238319f6ed9d1262fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:35 [async_llm.py:261] Added request cmpl-3633698aaa7a4b238319f6ed9d1262fd-0.
INFO 03-01 23:43:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:36 [logger.py:42] Received request cmpl-6e33db5f70b04553a790397954e3245f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:36 [async_llm.py:261] Added request cmpl-6e33db5f70b04553a790397954e3245f-0.
INFO 03-01 23:43:37 [logger.py:42] Received request cmpl-2adfbe1781e943299da6f808dd77c98c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:37 [async_llm.py:261] Added request cmpl-2adfbe1781e943299da6f808dd77c98c-0.
INFO 03-01 23:43:38 [logger.py:42] Received request cmpl-48ebf0e04fb345118802944aac0fd1c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:38 [async_llm.py:261] Added request cmpl-48ebf0e04fb345118802944aac0fd1c9-0.
INFO 03-01 23:43:39 [logger.py:42] Received request cmpl-b0e5883ef40244e281b3f70857de469c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:39 [async_llm.py:261] Added request cmpl-b0e5883ef40244e281b3f70857de469c-0.
INFO 03-01 23:43:40 [logger.py:42] Received request cmpl-9c6dcd0c81c54e64b9e1733bc3ff5576-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:40 [async_llm.py:261] Added request cmpl-9c6dcd0c81c54e64b9e1733bc3ff5576-0.
INFO 03-01 23:43:41 [logger.py:42] Received request cmpl-43b90a9bca844af3ada0a2fab6f16374-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:41 [async_llm.py:261] Added request cmpl-43b90a9bca844af3ada0a2fab6f16374-0.
INFO 03-01 23:43:42 [logger.py:42] Received request cmpl-bbe3d0715c834599b16be0afac75dd16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:42 [async_llm.py:261] Added request cmpl-bbe3d0715c834599b16be0afac75dd16-0.
INFO 03-01 23:43:44 [logger.py:42] Received request cmpl-dc2210ce17c74bac86ff7183bf57cb2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:44 [async_llm.py:261] Added request cmpl-dc2210ce17c74bac86ff7183bf57cb2b-0.
INFO 03-01 23:43:45 [logger.py:42] Received request cmpl-ba145e01a950412cbba8827bac2e45ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:45 [async_llm.py:261] Added request cmpl-ba145e01a950412cbba8827bac2e45ad-0.
INFO 03-01 23:43:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:43:46 [logger.py:42] Received request cmpl-72d1b5239f4e4af9bc50d82ca700fb09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:46 [async_llm.py:261] Added request cmpl-72d1b5239f4e4af9bc50d82ca700fb09-0.
INFO 03-01 23:43:47 [logger.py:42] Received request cmpl-f4a160d38e604dcaaaff66a56a846022-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:47 [async_llm.py:261] Added request cmpl-f4a160d38e604dcaaaff66a56a846022-0.
INFO 03-01 23:43:48 [logger.py:42] Received request cmpl-798d92bcbf01419aa423748f3f72fe93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:43:48 [async_llm.py:261] Added request cmpl-798d92bcbf01419aa423748f3f72fe93-0.
[repeated request cycles elided: from 23:43:49 to 23:44:33 the engine logged one cycle per second, each consisting of a "Received request cmpl-…-0" entry with the same prompt and SamplingParams (temperature=0.0, max_tokens=5), a "POST /v1/completions HTTP/1.1" 200 OK line from 1.2.3.5:1235, and a matching "Added request" confirmation; the cycles differ only in timestamp and request id]
INFO 03-01 23:43:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:33 [async_llm.py:261] Added request cmpl-fe819dcfa5b645ba924c4f8029a8e41b-0.
INFO 03-01 23:44:34 [logger.py:42] Received request cmpl-7f8da6637476446e8cc7e7b1b5583c2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:34 [async_llm.py:261] Added request cmpl-7f8da6637476446e8cc7e7b1b5583c2f-0.
INFO 03-01 23:44:35 [logger.py:42] Received request cmpl-69488e72f8c6471ebab6bf3e07987031-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:35 [async_llm.py:261] Added request cmpl-69488e72f8c6471ebab6bf3e07987031-0.
INFO 03-01 23:44:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:36 [logger.py:42] Received request cmpl-ea376e70b4914afbae4a9787fea522c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:36 [async_llm.py:261] Added request cmpl-ea376e70b4914afbae4a9787fea522c4-0.
INFO 03-01 23:44:37 [logger.py:42] Received request cmpl-18ba2ee5df394920beb2826e06fef9f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:37 [async_llm.py:261] Added request cmpl-18ba2ee5df394920beb2826e06fef9f1-0.
INFO 03-01 23:44:38 [logger.py:42] Received request cmpl-598e9a25a3b9467d81423605951fe8b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:38 [async_llm.py:261] Added request cmpl-598e9a25a3b9467d81423605951fe8b1-0.
INFO 03-01 23:44:39 [logger.py:42] Received request cmpl-955a319c341044dfb49ef9712516e867-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:39 [async_llm.py:261] Added request cmpl-955a319c341044dfb49ef9712516e867-0.
INFO 03-01 23:44:40 [logger.py:42] Received request cmpl-a082bfd55c22414b87d76b9df26b9932-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:40 [async_llm.py:261] Added request cmpl-a082bfd55c22414b87d76b9df26b9932-0.
INFO 03-01 23:44:42 [logger.py:42] Received request cmpl-53c6d1a215f44bc08cd32df1e683f534-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:42 [async_llm.py:261] Added request cmpl-53c6d1a215f44bc08cd32df1e683f534-0.
INFO 03-01 23:44:43 [logger.py:42] Received request cmpl-700d4c75ff664d29805d2abc4d416f8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:43 [async_llm.py:261] Added request cmpl-700d4c75ff664d29805d2abc4d416f8a-0.
INFO 03-01 23:44:44 [logger.py:42] Received request cmpl-dac354dbd7164a029de5176bcad7021c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:44 [async_llm.py:261] Added request cmpl-dac354dbd7164a029de5176bcad7021c-0.
INFO 03-01 23:44:45 [logger.py:42] Received request cmpl-77940de0b63047b28b8264342576a0d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:45 [async_llm.py:261] Added request cmpl-77940de0b63047b28b8264342576a0d3-0.
INFO 03-01 23:44:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:46 [logger.py:42] Received request cmpl-f5802c13a7884442a51fab0685b7ff22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:46 [async_llm.py:261] Added request cmpl-f5802c13a7884442a51fab0685b7ff22-0.
INFO 03-01 23:44:47 [logger.py:42] Received request cmpl-36e822a490d94c13b897a373e2895a84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:47 [async_llm.py:261] Added request cmpl-36e822a490d94c13b897a373e2895a84-0.
INFO 03-01 23:44:48 [logger.py:42] Received request cmpl-f9a48de37bb640aa8d70fad82e124729-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:48 [async_llm.py:261] Added request cmpl-f9a48de37bb640aa8d70fad82e124729-0.
INFO 03-01 23:44:49 [logger.py:42] Received request cmpl-151b6b83647c4066acdb1173c1425db5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:49 [async_llm.py:261] Added request cmpl-151b6b83647c4066acdb1173c1425db5-0.
INFO 03-01 23:44:50 [logger.py:42] Received request cmpl-f0b043ada10b459b89aa30cfad897820-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:50 [async_llm.py:261] Added request cmpl-f0b043ada10b459b89aa30cfad897820-0.
INFO 03-01 23:44:51 [logger.py:42] Received request cmpl-f9d6355c7c8947739125b1b77d4f0540-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:51 [async_llm.py:261] Added request cmpl-f9d6355c7c8947739125b1b77d4f0540-0.
INFO 03-01 23:44:52 [logger.py:42] Received request cmpl-e952aefcf27a461f9687f80998b8c582-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:52 [async_llm.py:261] Added request cmpl-e952aefcf27a461f9687f80998b8c582-0.
INFO 03-01 23:44:54 [logger.py:42] Received request cmpl-3b2f3ca9d4d54a0581e2e00ead62ea16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:54 [async_llm.py:261] Added request cmpl-3b2f3ca9d4d54a0581e2e00ead62ea16-0.
INFO 03-01 23:44:55 [logger.py:42] Received request cmpl-003edc815ceb4d7483d81d6ff175448b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:55 [async_llm.py:261] Added request cmpl-003edc815ceb4d7483d81d6ff175448b-0.
INFO 03-01 23:44:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:44:56 [logger.py:42] Received request cmpl-5181a17bb3c643e382d93c9e35c0fdba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:56 [async_llm.py:261] Added request cmpl-5181a17bb3c643e382d93c9e35c0fdba-0.
INFO 03-01 23:44:57 [logger.py:42] Received request cmpl-6be6cfe1da7444ea8d6fcd6afb85681d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:57 [async_llm.py:261] Added request cmpl-6be6cfe1da7444ea8d6fcd6afb85681d-0.
INFO 03-01 23:44:58 [logger.py:42] Received request cmpl-3cf325d2bfe34f3faa992384be96dcfd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:58 [async_llm.py:261] Added request cmpl-3cf325d2bfe34f3faa992384be96dcfd-0.
INFO 03-01 23:44:59 [logger.py:42] Received request cmpl-ced266cfaa0f4b10b57807ff372ce4ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:44:59 [async_llm.py:261] Added request cmpl-ced266cfaa0f4b10b57807ff372ce4ad-0.
INFO 03-01 23:45:00 [logger.py:42] Received request cmpl-42d556e6255044cda544ea128ac0fbc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:00 [async_llm.py:261] Added request cmpl-42d556e6255044cda544ea128ac0fbc4-0.
INFO 03-01 23:45:01 [logger.py:42] Received request cmpl-c8c4c2ff8bf5403d8e1c91eca694fe31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:01 [async_llm.py:261] Added request cmpl-c8c4c2ff8bf5403d8e1c91eca694fe31-0.
INFO 03-01 23:45:02 [logger.py:42] Received request cmpl-0aca1d0a86b84b51aeebd108aad8f6c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:02 [async_llm.py:261] Added request cmpl-0aca1d0a86b84b51aeebd108aad8f6c4-0.
INFO 03-01 23:45:03 [logger.py:42] Received request cmpl-d81d8d5837c7407fa5badc9c76480009-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:03 [async_llm.py:261] Added request cmpl-d81d8d5837c7407fa5badc9c76480009-0.
INFO 03-01 23:45:04 [logger.py:42] Received request cmpl-28e46513683a4c3581a0afc58fe11434-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:04 [async_llm.py:261] Added request cmpl-28e46513683a4c3581a0afc58fe11434-0.
INFO 03-01 23:45:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:06 [logger.py:42] Received request cmpl-ff75abc34491408aad70bbd64411bc83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:06 [async_llm.py:261] Added request cmpl-ff75abc34491408aad70bbd64411bc83-0.
INFO 03-01 23:45:07 [logger.py:42] Received request cmpl-a4aaac40b86b4cb6a09c705985eda037-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:07 [async_llm.py:261] Added request cmpl-a4aaac40b86b4cb6a09c705985eda037-0.
INFO 03-01 23:45:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:52 [async_llm.py:261] Added request cmpl-d2bb88082b154872b786d4e15df1d20f-0.
INFO 03-01 23:45:53 [logger.py:42] Received request cmpl-1e81ce7d5f6b4e2299212277293f8721-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:53 [async_llm.py:261] Added request cmpl-1e81ce7d5f6b4e2299212277293f8721-0.
INFO 03-01 23:45:54 [logger.py:42] Received request cmpl-80320ec0b13948a284b89f6afcf8b4f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:54 [async_llm.py:261] Added request cmpl-80320ec0b13948a284b89f6afcf8b4f1-0.
INFO 03-01 23:45:55 [logger.py:42] Received request cmpl-b5c632899f6d442f84a998fc100cc241-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:55 [async_llm.py:261] Added request cmpl-b5c632899f6d442f84a998fc100cc241-0.
INFO 03-01 23:45:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:45:56 [logger.py:42] Received request cmpl-8b0040cff8c14c558d6db110603be10d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:56 [async_llm.py:261] Added request cmpl-8b0040cff8c14c558d6db110603be10d-0.
INFO 03-01 23:45:57 [logger.py:42] Received request cmpl-09ab4c08a54241989aaddb23faf1ec94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:57 [async_llm.py:261] Added request cmpl-09ab4c08a54241989aaddb23faf1ec94-0.
INFO 03-01 23:45:58 [logger.py:42] Received request cmpl-5dbe0c705c06478593789716bb0e71f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:58 [async_llm.py:261] Added request cmpl-5dbe0c705c06478593789716bb0e71f6-0.
INFO 03-01 23:45:59 [logger.py:42] Received request cmpl-5446f97f51f246f2b8131187971a4e83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:45:59 [async_llm.py:261] Added request cmpl-5446f97f51f246f2b8131187971a4e83-0.
INFO 03-01 23:46:00 [logger.py:42] Received request cmpl-3177b83aa3e74c6cb6d1861e75668b7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:00 [async_llm.py:261] Added request cmpl-3177b83aa3e74c6cb6d1861e75668b7e-0.
INFO 03-01 23:46:01 [logger.py:42] Received request cmpl-fd7c0c1e0e954fa7b2504faa2370a572-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:01 [async_llm.py:261] Added request cmpl-fd7c0c1e0e954fa7b2504faa2370a572-0.
INFO 03-01 23:46:02 [logger.py:42] Received request cmpl-009bea30914542e7912bb7e39cf7cc6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:02 [async_llm.py:261] Added request cmpl-009bea30914542e7912bb7e39cf7cc6c-0.
INFO 03-01 23:46:04 [logger.py:42] Received request cmpl-1414e644875f4824a2835f84c7a67ce8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:04 [async_llm.py:261] Added request cmpl-1414e644875f4824a2835f84c7a67ce8-0.
INFO 03-01 23:46:05 [logger.py:42] Received request cmpl-7f452804e5844ea8a41ab83fdb255a9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:05 [async_llm.py:261] Added request cmpl-7f452804e5844ea8a41ab83fdb255a9a-0.
INFO 03-01 23:46:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:06 [logger.py:42] Received request cmpl-0dfc2ce45a374ac289c171132b2bcd1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:06 [async_llm.py:261] Added request cmpl-0dfc2ce45a374ac289c171132b2bcd1e-0.
INFO 03-01 23:46:07 [logger.py:42] Received request cmpl-02667d9d1fe542149988747f7aae1df7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:07 [async_llm.py:261] Added request cmpl-02667d9d1fe542149988747f7aae1df7-0.
INFO 03-01 23:46:08 [logger.py:42] Received request cmpl-4d1ea57a859149c78361e07ac05dd170-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:08 [async_llm.py:261] Added request cmpl-4d1ea57a859149c78361e07ac05dd170-0.
INFO 03-01 23:46:09 [logger.py:42] Received request cmpl-10b15fdf91e944a7b68e404ebf7ce2d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:09 [async_llm.py:261] Added request cmpl-10b15fdf91e944a7b68e404ebf7ce2d7-0.
INFO 03-01 23:46:10 [logger.py:42] Received request cmpl-e1e99a1fde0b4c06adca3cade9adfb52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:10 [async_llm.py:261] Added request cmpl-e1e99a1fde0b4c06adca3cade9adfb52-0.
INFO 03-01 23:46:11 [logger.py:42] Received request cmpl-34d10c3a77ef4d2082a15d16060bc363-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:11 [async_llm.py:261] Added request cmpl-34d10c3a77ef4d2082a15d16060bc363-0.
INFO 03-01 23:46:12 [logger.py:42] Received request cmpl-372854cf576546118f8bb786ab95cfc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:12 [async_llm.py:261] Added request cmpl-372854cf576546118f8bb786ab95cfc3-0.
INFO 03-01 23:46:13 [logger.py:42] Received request cmpl-4993696dedef4d68b0766ac8dcc5764d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:13 [async_llm.py:261] Added request cmpl-4993696dedef4d68b0766ac8dcc5764d-0.
INFO 03-01 23:46:14 [logger.py:42] Received request cmpl-f4263f42bccd4a33a6072db3e1febe79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:14 [async_llm.py:261] Added request cmpl-f4263f42bccd4a33a6072db3e1febe79-0.
INFO 03-01 23:46:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:16 [logger.py:42] Received request cmpl-9070306d5ed54c469f257deafd4f0677-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:16 [async_llm.py:261] Added request cmpl-9070306d5ed54c469f257deafd4f0677-0.
INFO 03-01 23:46:17 [logger.py:42] Received request cmpl-96420fcc059a4db28e7f3d596d121e7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:17 [async_llm.py:261] Added request cmpl-96420fcc059a4db28e7f3d596d121e7e-0.
INFO 03-01 23:46:18 [logger.py:42] Received request cmpl-1d1df2ecb43f491f90badb9f803d13fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:18 [async_llm.py:261] Added request cmpl-1d1df2ecb43f491f90badb9f803d13fc-0.
INFO 03-01 23:46:19 [logger.py:42] Received request cmpl-98da7d0508e748b094ddd93f20bd5ef9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:19 [async_llm.py:261] Added request cmpl-98da7d0508e748b094ddd93f20bd5ef9-0.
INFO 03-01 23:46:20 [logger.py:42] Received request cmpl-217df287ee4a48d5baf4616312c0b1fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:20 [async_llm.py:261] Added request cmpl-217df287ee4a48d5baf4616312c0b1fd-0.
INFO 03-01 23:46:21 [logger.py:42] Received request cmpl-f478e505125c463fa23457c107110d16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:21 [async_llm.py:261] Added request cmpl-f478e505125c463fa23457c107110d16-0.
INFO 03-01 23:46:22 [logger.py:42] Received request cmpl-895066ddffe344708d2afe5024a48e97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:22 [async_llm.py:261] Added request cmpl-895066ddffe344708d2afe5024a48e97-0.
INFO 03-01 23:46:23 [logger.py:42] Received request cmpl-59b4bfe7f5fe4f9ebf861ae7d2766102-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:23 [async_llm.py:261] Added request cmpl-59b4bfe7f5fe4f9ebf861ae7d2766102-0.
INFO 03-01 23:46:24 [logger.py:42] Received request cmpl-18db31834a874bdf8efc76217261e3e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:24 [async_llm.py:261] Added request cmpl-18db31834a874bdf8efc76217261e3e3-0.
INFO 03-01 23:46:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:25 [logger.py:42] Received request cmpl-2d1e5dbdfda349c58b2061908f35456c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:25 [async_llm.py:261] Added request cmpl-2d1e5dbdfda349c58b2061908f35456c-0.
INFO 03-01 23:46:27 [logger.py:42] Received request cmpl-5be325b70abd43caa98ae1dedc7a74be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:27 [async_llm.py:261] Added request cmpl-5be325b70abd43caa98ae1dedc7a74be-0.
INFO 03-01 23:46:28 [logger.py:42] Received request cmpl-fd6d790a646a47e8ba436a5ddb0e6e97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:28 [async_llm.py:261] Added request cmpl-fd6d790a646a47e8ba436a5ddb0e6e97-0.
INFO 03-01 23:46:29 [logger.py:42] Received request cmpl-f111a29bcad8419f9e6b724b75d8532a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:29 [async_llm.py:261] Added request cmpl-f111a29bcad8419f9e6b724b75d8532a-0.
INFO 03-01 23:46:30 [logger.py:42] Received request cmpl-f34a8e2b3ba64c89a3581a96a23c3f44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:30 [async_llm.py:261] Added request cmpl-f34a8e2b3ba64c89a3581a96a23c3f44-0.
INFO 03-01 23:46:31 [logger.py:42] Received request cmpl-b90823bd23c04e71a2becd36c2989714-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:31 [async_llm.py:261] Added request cmpl-b90823bd23c04e71a2becd36c2989714-0.
INFO 03-01 23:46:32 [logger.py:42] Received request cmpl-a1123fc698444f73b7c596df5eb0bffc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:32 [async_llm.py:261] Added request cmpl-a1123fc698444f73b7c596df5eb0bffc-0.
INFO 03-01 23:46:33 [logger.py:42] Received request cmpl-571e53b124324d4e83087b9d0b885c85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:33 [async_llm.py:261] Added request cmpl-571e53b124324d4e83087b9d0b885c85-0.
INFO 03-01 23:46:34 [logger.py:42] Received request cmpl-019d97d442b44ce2a602822f1a450b3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:34 [async_llm.py:261] Added request cmpl-019d97d442b44ce2a602822f1a450b3f-0.
INFO 03-01 23:46:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:35 [logger.py:42] Received request cmpl-17bae9994baa4a068d0a2cef2a1be146-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:35 [async_llm.py:261] Added request cmpl-17bae9994baa4a068d0a2cef2a1be146-0.
INFO 03-01 23:46:36 [logger.py:42] Received request cmpl-d85b07e8efe74216ac58071b81e681e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:36 [async_llm.py:261] Added request cmpl-d85b07e8efe74216ac58071b81e681e0-0.
INFO 03-01 23:46:37 [logger.py:42] Received request cmpl-70a52c4d7a6341b6a932e77591e45433-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:37 [async_llm.py:261] Added request cmpl-70a52c4d7a6341b6a932e77591e45433-0.
INFO 03-01 23:46:39 [logger.py:42] Received request cmpl-329a4a5958cd49bf9bf8d2c107e97e24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:39 [async_llm.py:261] Added request cmpl-329a4a5958cd49bf9bf8d2c107e97e24-0.
INFO 03-01 23:46:40 [logger.py:42] Received request cmpl-270d4ee1a91c4f82bdb0544527cfa9bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:40 [async_llm.py:261] Added request cmpl-270d4ee1a91c4f82bdb0544527cfa9bb-0.
INFO 03-01 23:46:41 [logger.py:42] Received request cmpl-d97eba82c8a5458a80ea4e5cffed8506-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:41 [async_llm.py:261] Added request cmpl-d97eba82c8a5458a80ea4e5cffed8506-0.
INFO 03-01 23:46:42 [logger.py:42] Received request cmpl-36e8d232b6564df49e7e4c748c15c8e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:42 [async_llm.py:261] Added request cmpl-36e8d232b6564df49e7e4c748c15c8e5-0.
INFO 03-01 23:46:43 [logger.py:42] Received request cmpl-fb8bc363e9f648a891f4896ecd4f24c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:43 [async_llm.py:261] Added request cmpl-fb8bc363e9f648a891f4896ecd4f24c2-0.
INFO 03-01 23:46:44 [logger.py:42] Received request cmpl-442c3c00a55344c7a544705296029056-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:44 [async_llm.py:261] Added request cmpl-442c3c00a55344c7a544705296029056-0.
INFO 03-01 23:46:45 [logger.py:42] Received request cmpl-66d7e9c7aad248779fe2407a8e3d61b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:45 [async_llm.py:261] Added request cmpl-66d7e9c7aad248779fe2407a8e3d61b2-0.
INFO 03-01 23:46:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
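The periodic `loggers.py` lines report engine-level averages, and they can be cross-checked against the request stream itself: each request carries 7 prompt tokens (`prompt_token_ids` has 7 entries) and at most 5 generated tokens (`max_tokens=5`), with roughly one request arriving per second. A small sketch of that arithmetic, with the request count and window read off the log between the 23:46:35 and 23:46:45 metrics lines:

```python
# Cross-check the engine's reported averages against the request stream.
# The log shows 10 requests added in the 10-second window between the
# two metrics lines (23:46:35 through 23:46:45).
requests_in_window = 10
window_seconds = 10

prompt_tokens_per_request = 7   # length of prompt_token_ids in each entry
max_gen_tokens_per_request = 5  # max_tokens=5 in SamplingParams

prompt_tps = requests_in_window * prompt_tokens_per_request / window_seconds
gen_tps_upper_bound = requests_in_window * max_gen_tokens_per_request / window_seconds

print(prompt_tps)           # expected prompt throughput, tokens/s
print(gen_tps_upper_bound)  # upper bound on generation throughput, tokens/s
```

This yields 7.0 prompt tokens/s and an upper bound of 5.0 generation tokens/s, consistent with the logged 7.0 / 4.9 (generation can fall slightly below the bound when a request stops early on an EOS token).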
INFO 03-01 23:46:46 [logger.py:42] Received request cmpl-615a4a8569154fbb8aa76f889eb1cc7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:46 [async_llm.py:261] Added request cmpl-615a4a8569154fbb8aa76f889eb1cc7c-0.
INFO 03-01 23:46:47 [logger.py:42] Received request cmpl-37a51b721372485aa40abce38ebc79b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:47 [async_llm.py:261] Added request cmpl-37a51b721372485aa40abce38ebc79b4-0.
INFO 03-01 23:46:48 [logger.py:42] Received request cmpl-62f5013a9c1a4100b31507ed95d3c378-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:48 [async_llm.py:261] Added request cmpl-62f5013a9c1a4100b31507ed95d3c378-0.
INFO 03-01 23:46:49 [logger.py:42] Received request cmpl-438a109032164ebc970a60964ce905a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:49 [async_llm.py:261] Added request cmpl-438a109032164ebc970a60964ce905a9-0.
INFO 03-01 23:46:51 [logger.py:42] Received request cmpl-1d021d41d0d84609b7fdeebbb00377f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:51 [async_llm.py:261] Added request cmpl-1d021d41d0d84609b7fdeebbb00377f8-0.
INFO 03-01 23:46:52 [logger.py:42] Received request cmpl-2d4dc555c4af4b999fc0fad3468aea4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:52 [async_llm.py:261] Added request cmpl-2d4dc555c4af4b999fc0fad3468aea4d-0.
INFO 03-01 23:46:53 [logger.py:42] Received request cmpl-f266012580634b6d96a2f249b1823697-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:53 [async_llm.py:261] Added request cmpl-f266012580634b6d96a2f249b1823697-0.
INFO 03-01 23:46:54 [logger.py:42] Received request cmpl-65c2bba43bfa4a02bd6078276cab344a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:54 [async_llm.py:261] Added request cmpl-65c2bba43bfa4a02bd6078276cab344a-0.
INFO 03-01 23:46:55 [logger.py:42] Received request cmpl-ddd943057d1f43c294c44fbd53e1f7bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:55 [async_llm.py:261] Added request cmpl-ddd943057d1f43c294c44fbd53e1f7bb-0.
INFO 03-01 23:46:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:46:56 [logger.py:42] Received request cmpl-b15caf6dba384b439df7613a04271ad4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:56 [async_llm.py:261] Added request cmpl-b15caf6dba384b439df7613a04271ad4-0.
INFO 03-01 23:46:57 [logger.py:42] Received request cmpl-7c351a2f6d024ffeb4d216a27c67a0b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:57 [async_llm.py:261] Added request cmpl-7c351a2f6d024ffeb4d216a27c67a0b6-0.
INFO 03-01 23:46:58 [logger.py:42] Received request cmpl-3be23d6c49d24c699646d72d238aa33d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:58 [async_llm.py:261] Added request cmpl-3be23d6c49d24c699646d72d238aa33d-0.
INFO 03-01 23:46:59 [logger.py:42] Received request cmpl-5ad72154b52d475c89666d43e33f45d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:46:59 [async_llm.py:261] Added request cmpl-5ad72154b52d475c89666d43e33f45d8-0.
INFO 03-01 23:47:00 [logger.py:42] Received request cmpl-407821b5658947efb3c416dc765dacb4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:00 [async_llm.py:261] Added request cmpl-407821b5658947efb3c416dc765dacb4-0.
INFO 03-01 23:47:02 [logger.py:42] Received request cmpl-e3540dff8cc7411aa7f5291cbe57968e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:02 [async_llm.py:261] Added request cmpl-e3540dff8cc7411aa7f5291cbe57968e-0.
INFO 03-01 23:47:03 [logger.py:42] Received request cmpl-676da74e8fab4dad9ee4e7ed8ed3a4c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:03 [async_llm.py:261] Added request cmpl-676da74e8fab4dad9ee4e7ed8ed3a4c6-0.
INFO 03-01 23:47:04 [logger.py:42] Received request cmpl-64e6789918b34f108f666abcfc1a0df9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:04 [async_llm.py:261] Added request cmpl-64e6789918b34f108f666abcfc1a0df9-0.
INFO 03-01 23:47:05 [logger.py:42] Received request cmpl-a3b86b67ba0f41f69f60c35dd8e3e4ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:05 [async_llm.py:261] Added request cmpl-a3b86b67ba0f41f69f60c35dd8e3e4ae-0.
INFO 03-01 23:47:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... the same request/response cycle repeats roughly once per second through 23:47:50: 41 further "Received request" / "POST /v1/completions 200 OK" / "Added request" triples, identical except for the request IDs and timestamps; the periodic Engine 000 summaries at 23:47:15, 23:47:25, 23:47:35, and 23:47:45 report unchanged figures (6.3 tokens/s prompt, 4.5 tokens/s generation, 0 running, 0 waiting, 0.0% GPU KV cache usage, 0.0% prefix cache hit rate) ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:50 [async_llm.py:261] Added request cmpl-2eeca9c8cc31421b98aca59cdb1f8ad6-0.
INFO 03-01 23:47:51 [logger.py:42] Received request cmpl-55cd4a2e27a144ac99c86feaba9cd4ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:51 [async_llm.py:261] Added request cmpl-55cd4a2e27a144ac99c86feaba9cd4ab-0.
INFO 03-01 23:47:52 [logger.py:42] Received request cmpl-afd3fa9451ec45ea8ddc5dbffa7af225-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:52 [async_llm.py:261] Added request cmpl-afd3fa9451ec45ea8ddc5dbffa7af225-0.
INFO 03-01 23:47:53 [logger.py:42] Received request cmpl-6df88040a7b348db8dc4502f8ebe20ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:53 [async_llm.py:261] Added request cmpl-6df88040a7b348db8dc4502f8ebe20ac-0.
INFO 03-01 23:47:54 [logger.py:42] Received request cmpl-32595a6ec1b442f3bf979550b5a740c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:54 [async_llm.py:261] Added request cmpl-32595a6ec1b442f3bf979550b5a740c6-0.
INFO 03-01 23:47:55 [logger.py:42] Received request cmpl-39e0eaef3d964a6bb98fb5965b4ff561-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:55 [async_llm.py:261] Added request cmpl-39e0eaef3d964a6bb98fb5965b4ff561-0.
INFO 03-01 23:47:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:47:56 [logger.py:42] Received request cmpl-04223fe719df49a79e36fb3a17c1c6c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:56 [async_llm.py:261] Added request cmpl-04223fe719df49a79e36fb3a17c1c6c0-0.
INFO 03-01 23:47:57 [logger.py:42] Received request cmpl-6cdbb0e4afaa45c2a2798c59921293c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:57 [async_llm.py:261] Added request cmpl-6cdbb0e4afaa45c2a2798c59921293c3-0.
INFO 03-01 23:47:58 [logger.py:42] Received request cmpl-35ce5eea506e4677ac6ea57ed360069f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:47:58 [async_llm.py:261] Added request cmpl-35ce5eea506e4677ac6ea57ed360069f-0.
INFO 03-01 23:48:00 [logger.py:42] Received request cmpl-35e5e796688044499f62512c9a8999e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:00 [async_llm.py:261] Added request cmpl-35e5e796688044499f62512c9a8999e4-0.
INFO 03-01 23:48:01 [logger.py:42] Received request cmpl-e907df82884a4cb0802996c867b31adc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:01 [async_llm.py:261] Added request cmpl-e907df82884a4cb0802996c867b31adc-0.
INFO 03-01 23:48:02 [logger.py:42] Received request cmpl-5d99567737284ed1b3d55d52cdf6b192-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:02 [async_llm.py:261] Added request cmpl-5d99567737284ed1b3d55d52cdf6b192-0.
INFO 03-01 23:48:03 [logger.py:42] Received request cmpl-11a5dbcaf5784bdda58a0112ecb7e960-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:03 [async_llm.py:261] Added request cmpl-11a5dbcaf5784bdda58a0112ecb7e960-0.
INFO 03-01 23:48:04 [logger.py:42] Received request cmpl-2629919a16d5494dba6df8420b53cc65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:04 [async_llm.py:261] Added request cmpl-2629919a16d5494dba6df8420b53cc65-0.
INFO 03-01 23:48:05 [logger.py:42] Received request cmpl-3c556cf712124f0fbff2d08c24c5313f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:05 [async_llm.py:261] Added request cmpl-3c556cf712124f0fbff2d08c24c5313f-0.
INFO 03-01 23:48:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:06 [logger.py:42] Received request cmpl-05d28fa7c65e4d6aa2be1ccb7bd0b796-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:06 [async_llm.py:261] Added request cmpl-05d28fa7c65e4d6aa2be1ccb7bd0b796-0.
INFO 03-01 23:48:07 [logger.py:42] Received request cmpl-ecd23b67af7144ae81083908617367a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:07 [async_llm.py:261] Added request cmpl-ecd23b67af7144ae81083908617367a1-0.
INFO 03-01 23:48:08 [logger.py:42] Received request cmpl-38528131d734427dad84f164232bf06b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:08 [async_llm.py:261] Added request cmpl-38528131d734427dad84f164232bf06b-0.
INFO 03-01 23:48:09 [logger.py:42] Received request cmpl-289f076168124b3080a1c85da95810c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:09 [async_llm.py:261] Added request cmpl-289f076168124b3080a1c85da95810c4-0.
INFO 03-01 23:48:10 [logger.py:42] Received request cmpl-313dbfc0dbf943ff944f4a766f1c8862-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:10 [async_llm.py:261] Added request cmpl-313dbfc0dbf943ff944f4a766f1c8862-0.
INFO 03-01 23:48:12 [logger.py:42] Received request cmpl-1328f30963c94ce08489fb47a72598a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:12 [async_llm.py:261] Added request cmpl-1328f30963c94ce08489fb47a72598a8-0.
INFO 03-01 23:48:13 [logger.py:42] Received request cmpl-93ee1974621c46918e3990e15cd75dd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:13 [async_llm.py:261] Added request cmpl-93ee1974621c46918e3990e15cd75dd7-0.
INFO 03-01 23:48:14 [logger.py:42] Received request cmpl-ec860b0054634f758d75816cd4ffc175-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:14 [async_llm.py:261] Added request cmpl-ec860b0054634f758d75816cd4ffc175-0.
INFO 03-01 23:48:15 [logger.py:42] Received request cmpl-481d575151824835a9a7713a0ab8ae84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:15 [async_llm.py:261] Added request cmpl-481d575151824835a9a7713a0ab8ae84-0.
INFO 03-01 23:48:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:16 [logger.py:42] Received request cmpl-212f8e070fa14afbbb7f190d3ddd7532-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:16 [async_llm.py:261] Added request cmpl-212f8e070fa14afbbb7f190d3ddd7532-0.
INFO 03-01 23:48:17 [logger.py:42] Received request cmpl-17002a2b5e49450c968b09d8d7153c2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:17 [async_llm.py:261] Added request cmpl-17002a2b5e49450c968b09d8d7153c2a-0.
INFO 03-01 23:48:18 [logger.py:42] Received request cmpl-3e376024e58f47a0bec384797b5d2c2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:18 [async_llm.py:261] Added request cmpl-3e376024e58f47a0bec384797b5d2c2a-0.
INFO 03-01 23:48:19 [logger.py:42] Received request cmpl-093cfb7754674a4597bec08b5be320e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:19 [async_llm.py:261] Added request cmpl-093cfb7754674a4597bec08b5be320e5-0.
INFO 03-01 23:48:20 [logger.py:42] Received request cmpl-6633d997ce4444bb8aecf5074de8c056-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:20 [async_llm.py:261] Added request cmpl-6633d997ce4444bb8aecf5074de8c056-0.
INFO 03-01 23:48:21 [logger.py:42] Received request cmpl-57f96047377f4405a0b81538f4da83ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:21 [async_llm.py:261] Added request cmpl-57f96047377f4405a0b81538f4da83ee-0.
INFO 03-01 23:48:23 [logger.py:42] Received request cmpl-c9bdc6406ff748fd85aaebefeec57f4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:23 [async_llm.py:261] Added request cmpl-c9bdc6406ff748fd85aaebefeec57f4e-0.
INFO 03-01 23:48:24 [logger.py:42] Received request cmpl-4f52a5f0be414d95a006dad7ef4f5f92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:24 [async_llm.py:261] Added request cmpl-4f52a5f0be414d95a006dad7ef4f5f92-0.
INFO 03-01 23:48:25 [logger.py:42] Received request cmpl-765fdca9a8d94daca032a33cae58e03e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:25 [async_llm.py:261] Added request cmpl-765fdca9a8d94daca032a33cae58e03e-0.
INFO 03-01 23:48:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:26 [logger.py:42] Received request cmpl-7b277dc81b664dd4835b11dee5498ceb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:26 [async_llm.py:261] Added request cmpl-7b277dc81b664dd4835b11dee5498ceb-0.
INFO 03-01 23:48:27 [logger.py:42] Received request cmpl-b1a854e40c0b4ede9f0ef35a7e1bcd74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:27 [async_llm.py:261] Added request cmpl-b1a854e40c0b4ede9f0ef35a7e1bcd74-0.
INFO 03-01 23:48:28 [logger.py:42] Received request cmpl-c2c6a9186a1b418193d14d43b348a85c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:28 [async_llm.py:261] Added request cmpl-c2c6a9186a1b418193d14d43b348a85c-0.
INFO 03-01 23:48:29 [logger.py:42] Received request cmpl-26ca763b01364ed49df5a33363d69ee6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:29 [async_llm.py:261] Added request cmpl-26ca763b01364ed49df5a33363d69ee6-0.
INFO 03-01 23:48:30 [logger.py:42] Received request cmpl-5e9b91119fb844c3b571a94da8d4af62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:30 [async_llm.py:261] Added request cmpl-5e9b91119fb844c3b571a94da8d4af62-0.
INFO 03-01 23:48:31 [logger.py:42] Received request cmpl-7c68b19e4432494a84ec063af36ca4b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:31 [async_llm.py:261] Added request cmpl-7c68b19e4432494a84ec063af36ca4b0-0.
INFO 03-01 23:48:32 [logger.py:42] Received request cmpl-498a467ae7934f8e8feec12a440a05d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:32 [async_llm.py:261] Added request cmpl-498a467ae7934f8e8feec12a440a05d9-0.
INFO 03-01 23:48:33 [logger.py:42] Received request cmpl-73ef3943caf646f88ca1d869f9ce6d8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:33 [async_llm.py:261] Added request cmpl-73ef3943caf646f88ca1d869f9ce6d8a-0.
INFO 03-01 23:48:35 [logger.py:42] Received request cmpl-6d1c932ddb664b42bd3551b67469cb3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:35 [async_llm.py:261] Added request cmpl-6d1c932ddb664b42bd3551b67469cb3d-0.
INFO 03-01 23:48:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:36 [logger.py:42] Received request cmpl-2d0eb8c52c1542078716755bc7e532e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:36 [async_llm.py:261] Added request cmpl-2d0eb8c52c1542078716755bc7e532e2-0.
INFO 03-01 23:48:37 [logger.py:42] Received request cmpl-7d8d22a86e0a414d9a74de6f3dd8abe1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:37 [async_llm.py:261] Added request cmpl-7d8d22a86e0a414d9a74de6f3dd8abe1-0.
INFO 03-01 23:48:38 [logger.py:42] Received request cmpl-4f4a875da34c4086a99301055b7820de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:38 [async_llm.py:261] Added request cmpl-4f4a875da34c4086a99301055b7820de-0.
INFO 03-01 23:48:39 [logger.py:42] Received request cmpl-f6259626476845f4919bbb9672a59932-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:39 [async_llm.py:261] Added request cmpl-f6259626476845f4919bbb9672a59932-0.
INFO 03-01 23:48:40 [logger.py:42] Received request cmpl-f9905da44c304230a821ffc2c30d6ca8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:40 [async_llm.py:261] Added request cmpl-f9905da44c304230a821ffc2c30d6ca8-0.
INFO 03-01 23:48:41 [logger.py:42] Received request cmpl-e778cd739d7845159ea2bdf37caa2d2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:41 [async_llm.py:261] Added request cmpl-e778cd739d7845159ea2bdf37caa2d2c-0.
INFO 03-01 23:48:42 [logger.py:42] Received request cmpl-35c8879c52364425a12e4aecc13f4c5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:42 [async_llm.py:261] Added request cmpl-35c8879c52364425a12e4aecc13f4c5d-0.
INFO 03-01 23:48:43 [logger.py:42] Received request cmpl-fa524182d0c74f64b797fd25921b0ea6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:43 [async_llm.py:261] Added request cmpl-fa524182d0c74f64b797fd25921b0ea6-0.
INFO 03-01 23:48:44 [logger.py:42] Received request cmpl-d777e45185604b8cba9923b3e0e4ea07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:44 [async_llm.py:261] Added request cmpl-d777e45185604b8cba9923b3e0e4ea07-0.
INFO 03-01 23:48:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:46 [logger.py:42] Received request cmpl-5688720a66c745fca0220664701ddb3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:46 [async_llm.py:261] Added request cmpl-5688720a66c745fca0220664701ddb3f-0.
INFO 03-01 23:48:47 [logger.py:42] Received request cmpl-ae7477c228414717ae52348d5003b6ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:47 [async_llm.py:261] Added request cmpl-ae7477c228414717ae52348d5003b6ee-0.
INFO 03-01 23:48:48 [logger.py:42] Received request cmpl-1bd350652c4b441f880eadc7cca90e13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:48 [async_llm.py:261] Added request cmpl-1bd350652c4b441f880eadc7cca90e13-0.
INFO 03-01 23:48:49 [logger.py:42] Received request cmpl-088dd3757a0846309076c5b6ba1cea01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:49 [async_llm.py:261] Added request cmpl-088dd3757a0846309076c5b6ba1cea01-0.
INFO 03-01 23:48:50 [logger.py:42] Received request cmpl-0e6c9d8f58b04c29a4a3a78a4ef37722-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:50 [async_llm.py:261] Added request cmpl-0e6c9d8f58b04c29a4a3a78a4ef37722-0.
INFO 03-01 23:48:51 [logger.py:42] Received request cmpl-6fc9bbab886f4f19a44ec079088e0dcc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:51 [async_llm.py:261] Added request cmpl-6fc9bbab886f4f19a44ec079088e0dcc-0.
INFO 03-01 23:48:52 [logger.py:42] Received request cmpl-1ac994dc6e3741b5bf1dd13fde136357-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:52 [async_llm.py:261] Added request cmpl-1ac994dc6e3741b5bf1dd13fde136357-0.
INFO 03-01 23:48:53 [logger.py:42] Received request cmpl-11f119de47e44675b5c9bbf9d5f24b5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:53 [async_llm.py:261] Added request cmpl-11f119de47e44675b5c9bbf9d5f24b5a-0.
INFO 03-01 23:48:54 [logger.py:42] Received request cmpl-ca2042919194433ab0896271628656ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:54 [async_llm.py:261] Added request cmpl-ca2042919194433ab0896271628656ae-0.
INFO 03-01 23:48:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:48:55 [logger.py:42] Received request cmpl-8294deea99534b2abb09a8901fa40177-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:55 [async_llm.py:261] Added request cmpl-8294deea99534b2abb09a8901fa40177-0.
INFO 03-01 23:48:56 [logger.py:42] Received request cmpl-486cfa5dab064a6abb5dbbeabf67227e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:56 [async_llm.py:261] Added request cmpl-486cfa5dab064a6abb5dbbeabf67227e-0.
INFO 03-01 23:48:58 [logger.py:42] Received request cmpl-3dffd331b689445cbed3d01e4cdb069e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:58 [async_llm.py:261] Added request cmpl-3dffd331b689445cbed3d01e4cdb069e-0.
INFO 03-01 23:48:59 [logger.py:42] Received request cmpl-998fc6f33e2a4062b5b62a9d2ed7129d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:48:59 [async_llm.py:261] Added request cmpl-998fc6f33e2a4062b5b62a9d2ed7129d-0.
INFO 03-01 23:49:00 [logger.py:42] Received request cmpl-146f1aad17294ec9ab370ddc7d8461bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:00 [async_llm.py:261] Added request cmpl-146f1aad17294ec9ab370ddc7d8461bf-0.
INFO 03-01 23:49:01 [logger.py:42] Received request cmpl-492895b4617b4d12a3cd3c4fbe1a8108-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:01 [async_llm.py:261] Added request cmpl-492895b4617b4d12a3cd3c4fbe1a8108-0.
INFO 03-01 23:49:02 [logger.py:42] Received request cmpl-d0fced0ef7874c9e96db79f54c215ba3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:02 [async_llm.py:261] Added request cmpl-d0fced0ef7874c9e96db79f54c215ba3-0.
INFO 03-01 23:49:03 [logger.py:42] Received request cmpl-8d8cb054fd5849ceb3e45cd73b88aefe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:03 [async_llm.py:261] Added request cmpl-8d8cb054fd5849ceb3e45cd73b88aefe-0.
INFO 03-01 23:49:04 [logger.py:42] Received request cmpl-b60d69979b8e4feca5d5e9e994072b40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:04 [async_llm.py:261] Added request cmpl-b60d69979b8e4feca5d5e9e994072b40-0.
INFO 03-01 23:49:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:46 [logger.py:42] Received request cmpl-c79c87e5c79240b7aa89a569e25fc8da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:46 [async_llm.py:261] Added request cmpl-c79c87e5c79240b7aa89a569e25fc8da-0.
INFO 03-01 23:49:47 [logger.py:42] Received request cmpl-07ee57dcfe9f444ebc655632c40f3c68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:47 [async_llm.py:261] Added request cmpl-07ee57dcfe9f444ebc655632c40f3c68-0.
INFO 03-01 23:49:48 [logger.py:42] Received request cmpl-db7e68ee21bc4d5586def59167732dd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:48 [async_llm.py:261] Added request cmpl-db7e68ee21bc4d5586def59167732dd4-0.
INFO 03-01 23:49:49 [logger.py:42] Received request cmpl-4fe25326731a46a2b713d9ecf406bc21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:49 [async_llm.py:261] Added request cmpl-4fe25326731a46a2b713d9ecf406bc21-0.
INFO 03-01 23:49:50 [logger.py:42] Received request cmpl-331bbe6985de475084ab02218c058efc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:50 [async_llm.py:261] Added request cmpl-331bbe6985de475084ab02218c058efc-0.
INFO 03-01 23:49:51 [logger.py:42] Received request cmpl-38ca06e8f5e14cbab571a76e2e59585d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:51 [async_llm.py:261] Added request cmpl-38ca06e8f5e14cbab571a76e2e59585d-0.
INFO 03-01 23:49:52 [logger.py:42] Received request cmpl-5357b29505474207b7b4aae78c40aa2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:52 [async_llm.py:261] Added request cmpl-5357b29505474207b7b4aae78c40aa2a-0.
INFO 03-01 23:49:53 [logger.py:42] Received request cmpl-56c0f8ec7de443bdb9c1da72566db9df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:53 [async_llm.py:261] Added request cmpl-56c0f8ec7de443bdb9c1da72566db9df-0.
INFO 03-01 23:49:54 [logger.py:42] Received request cmpl-3502753bd88543a99c7c3734bc3a921d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:54 [async_llm.py:261] Added request cmpl-3502753bd88543a99c7c3734bc3a921d-0.
INFO 03-01 23:49:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:49:56 [logger.py:42] Received request cmpl-eb660aa5b13942569be0c51a55111180-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:56 [async_llm.py:261] Added request cmpl-eb660aa5b13942569be0c51a55111180-0.
INFO 03-01 23:49:57 [logger.py:42] Received request cmpl-2ae2f161beea429cbc3e053e26949981-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:57 [async_llm.py:261] Added request cmpl-2ae2f161beea429cbc3e053e26949981-0.
INFO 03-01 23:49:58 [logger.py:42] Received request cmpl-b2727c9d247f4a03b340756c1fdfaf47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:58 [async_llm.py:261] Added request cmpl-b2727c9d247f4a03b340756c1fdfaf47-0.
INFO 03-01 23:49:59 [logger.py:42] Received request cmpl-eb73565b79934ca986f10a2f975e018e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:49:59 [async_llm.py:261] Added request cmpl-eb73565b79934ca986f10a2f975e018e-0.
INFO 03-01 23:50:00 [logger.py:42] Received request cmpl-ea6eba3bb31d4172841a0465c446f3c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:00 [async_llm.py:261] Added request cmpl-ea6eba3bb31d4172841a0465c446f3c9-0.
INFO 03-01 23:50:01 [logger.py:42] Received request cmpl-365b9c21e53f48bbbb23080ad9baa3d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:01 [async_llm.py:261] Added request cmpl-365b9c21e53f48bbbb23080ad9baa3d2-0.
INFO 03-01 23:50:02 [logger.py:42] Received request cmpl-cf9e66a963274522ae047e0b29c73b94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:02 [async_llm.py:261] Added request cmpl-cf9e66a963274522ae047e0b29c73b94-0.
INFO 03-01 23:50:03 [logger.py:42] Received request cmpl-f9ef65007cd84865bca58c574634dd95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:03 [async_llm.py:261] Added request cmpl-f9ef65007cd84865bca58c574634dd95-0.
INFO 03-01 23:50:04 [logger.py:42] Received request cmpl-772a0ffffb604b69aee031f5e63a7229-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:04 [async_llm.py:261] Added request cmpl-772a0ffffb604b69aee031f5e63a7229-0.
INFO 03-01 23:50:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:05 [logger.py:42] Received request cmpl-3f7a808ddac24225a674e46773000d67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:05 [async_llm.py:261] Added request cmpl-3f7a808ddac24225a674e46773000d67-0.
INFO 03-01 23:50:07 [logger.py:42] Received request cmpl-3ce0d91041f449d88036b31240c5710e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:07 [async_llm.py:261] Added request cmpl-3ce0d91041f449d88036b31240c5710e-0.
INFO 03-01 23:50:08 [logger.py:42] Received request cmpl-7a2f411666dd4d4d9b42ee81f4d1f9ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:08 [async_llm.py:261] Added request cmpl-7a2f411666dd4d4d9b42ee81f4d1f9ad-0.
INFO 03-01 23:50:09 [logger.py:42] Received request cmpl-311a3ccf8c3940ffb68f417a9b8fb805-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:09 [async_llm.py:261] Added request cmpl-311a3ccf8c3940ffb68f417a9b8fb805-0.
INFO 03-01 23:50:10 [logger.py:42] Received request cmpl-faf9386ccc4347fd854e911fdf4fa1f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:10 [async_llm.py:261] Added request cmpl-faf9386ccc4347fd854e911fdf4fa1f4-0.
INFO 03-01 23:50:11 [logger.py:42] Received request cmpl-d808eaede11a4a7c8d55ffd3eeb4a450-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:11 [async_llm.py:261] Added request cmpl-d808eaede11a4a7c8d55ffd3eeb4a450-0.
INFO 03-01 23:50:12 [logger.py:42] Received request cmpl-b9fb4458e64b42aaad380d94c09157e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:12 [async_llm.py:261] Added request cmpl-b9fb4458e64b42aaad380d94c09157e6-0.
INFO 03-01 23:50:13 [logger.py:42] Received request cmpl-1cc8eaef2f1548bdad52f35d6589c09b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:13 [async_llm.py:261] Added request cmpl-1cc8eaef2f1548bdad52f35d6589c09b-0.
INFO 03-01 23:50:14 [logger.py:42] Received request cmpl-0f53c11b4ae64160a8e16e064b4dc37b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:14 [async_llm.py:261] Added request cmpl-0f53c11b4ae64160a8e16e064b4dc37b-0.
INFO 03-01 23:50:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:15 [logger.py:42] Received request cmpl-4aa3b4e180004ab7acb03bb6ca59a513-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:15 [async_llm.py:261] Added request cmpl-4aa3b4e180004ab7acb03bb6ca59a513-0.
INFO 03-01 23:50:16 [logger.py:42] Received request cmpl-1daa36db65044f36b735384951203c18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:16 [async_llm.py:261] Added request cmpl-1daa36db65044f36b735384951203c18-0.
INFO 03-01 23:50:17 [logger.py:42] Received request cmpl-53f9f6c07fde4ac58892446b2ab707fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:17 [async_llm.py:261] Added request cmpl-53f9f6c07fde4ac58892446b2ab707fd-0.
INFO 03-01 23:50:19 [logger.py:42] Received request cmpl-5baf0804381841f08d2c27b87e467e5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:19 [async_llm.py:261] Added request cmpl-5baf0804381841f08d2c27b87e467e5f-0.
INFO 03-01 23:50:20 [logger.py:42] Received request cmpl-b843f4df496a42fa9b70c70ce10a43cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:20 [async_llm.py:261] Added request cmpl-b843f4df496a42fa9b70c70ce10a43cd-0.
INFO 03-01 23:50:21 [logger.py:42] Received request cmpl-b2f17131a8b740288aa12d1724a811c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:21 [async_llm.py:261] Added request cmpl-b2f17131a8b740288aa12d1724a811c7-0.
INFO 03-01 23:50:22 [logger.py:42] Received request cmpl-a387ddb6f737403381695480303a1c4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:22 [async_llm.py:261] Added request cmpl-a387ddb6f737403381695480303a1c4c-0.
INFO 03-01 23:50:23 [logger.py:42] Received request cmpl-ed1a5b2cf40d4bea87198cd590efc27b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:23 [async_llm.py:261] Added request cmpl-ed1a5b2cf40d4bea87198cd590efc27b-0.
INFO 03-01 23:50:24 [logger.py:42] Received request cmpl-a657c037275e47669148b482f1f9f63c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:24 [async_llm.py:261] Added request cmpl-a657c037275e47669148b482f1f9f63c-0.
INFO 03-01 23:50:25 [logger.py:42] Received request cmpl-ead33bfddb90482d8d9e2dc41493e465-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:25 [async_llm.py:261] Added request cmpl-ead33bfddb90482d8d9e2dc41493e465-0.
INFO 03-01 23:50:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:26 [logger.py:42] Received request cmpl-ce81bef7123a492baa8f5a0003b1e677-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:26 [async_llm.py:261] Added request cmpl-ce81bef7123a492baa8f5a0003b1e677-0.
INFO 03-01 23:50:27 [logger.py:42] Received request cmpl-a9dd39ec2a0b4f568345e63fe06fef1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:27 [async_llm.py:261] Added request cmpl-a9dd39ec2a0b4f568345e63fe06fef1b-0.
INFO 03-01 23:50:28 [logger.py:42] Received request cmpl-84bbb0913e6647d6abc874a7b85dc01f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:28 [async_llm.py:261] Added request cmpl-84bbb0913e6647d6abc874a7b85dc01f-0.
INFO 03-01 23:50:29 [logger.py:42] Received request cmpl-999bcd834ca8436e83fc495bf983aa8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:29 [async_llm.py:261] Added request cmpl-999bcd834ca8436e83fc495bf983aa8a-0.
INFO 03-01 23:50:31 [logger.py:42] Received request cmpl-daad8b9bf98448d6a29680d2e38de5d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:31 [async_llm.py:261] Added request cmpl-daad8b9bf98448d6a29680d2e38de5d4-0.
INFO 03-01 23:50:32 [logger.py:42] Received request cmpl-e054d204f2fe4683a84bf43175b2f6d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:32 [async_llm.py:261] Added request cmpl-e054d204f2fe4683a84bf43175b2f6d8-0.
INFO 03-01 23:50:33 [logger.py:42] Received request cmpl-f92f0a09fd1d47b19fa48bd83aa6ea85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:33 [async_llm.py:261] Added request cmpl-f92f0a09fd1d47b19fa48bd83aa6ea85-0.
INFO 03-01 23:50:34 [logger.py:42] Received request cmpl-d549f9f36f4f4d3eb80e5bab88eb089d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:34 [async_llm.py:261] Added request cmpl-d549f9f36f4f4d3eb80e5bab88eb089d-0.
INFO 03-01 23:50:35 [logger.py:42] Received request cmpl-1542b8d5864b41cd945c777e01b4e2b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:35 [async_llm.py:261] Added request cmpl-1542b8d5864b41cd945c777e01b4e2b9-0.
INFO 03-01 23:50:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:36 [logger.py:42] Received request cmpl-33c88f3dfa5c47338c0b137441b8899c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:36 [async_llm.py:261] Added request cmpl-33c88f3dfa5c47338c0b137441b8899c-0.
INFO 03-01 23:50:37 [logger.py:42] Received request cmpl-98e4c76e4eb64aa689db1fcd2b682e85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:37 [async_llm.py:261] Added request cmpl-98e4c76e4eb64aa689db1fcd2b682e85-0.
INFO 03-01 23:50:38 [logger.py:42] Received request cmpl-905876758dcc40ad84d9bbe94b9c1d7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:38 [async_llm.py:261] Added request cmpl-905876758dcc40ad84d9bbe94b9c1d7d-0.
INFO 03-01 23:50:39 [logger.py:42] Received request cmpl-76d132722a154b169c5f5089f24eb164-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:39 [async_llm.py:261] Added request cmpl-76d132722a154b169c5f5089f24eb164-0.
INFO 03-01 23:50:40 [logger.py:42] Received request cmpl-090bed9f15174337820730d511e7c4ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:40 [async_llm.py:261] Added request cmpl-090bed9f15174337820730d511e7c4ee-0.
INFO 03-01 23:50:41 [logger.py:42] Received request cmpl-34f7539f3db34a288c5ed9273737a3bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:41 [async_llm.py:261] Added request cmpl-34f7539f3db34a288c5ed9273737a3bb-0.
INFO 03-01 23:50:43 [logger.py:42] Received request cmpl-41e4af73e1df4a8fa1067f25ca52aa3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:43 [async_llm.py:261] Added request cmpl-41e4af73e1df4a8fa1067f25ca52aa3d-0.
INFO 03-01 23:50:44 [logger.py:42] Received request cmpl-273a2a8844e54e7f9b74ffc7600efecd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:44 [async_llm.py:261] Added request cmpl-273a2a8844e54e7f9b74ffc7600efecd-0.
INFO 03-01 23:50:45 [logger.py:42] Received request cmpl-c450cf1b290947219b2226e339eece23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:45 [async_llm.py:261] Added request cmpl-c450cf1b290947219b2226e339eece23-0.
INFO 03-01 23:50:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:46 [logger.py:42] Received request cmpl-3da3fc517ec543f0a9b3a139877c3116-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:46 [async_llm.py:261] Added request cmpl-3da3fc517ec543f0a9b3a139877c3116-0.
INFO 03-01 23:50:47 [logger.py:42] Received request cmpl-edec9280324846309115494f186e773b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:47 [async_llm.py:261] Added request cmpl-edec9280324846309115494f186e773b-0.
INFO 03-01 23:50:48 [logger.py:42] Received request cmpl-31f7aa6c3d6d4ac89532b1e496f916f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:48 [async_llm.py:261] Added request cmpl-31f7aa6c3d6d4ac89532b1e496f916f3-0.
INFO 03-01 23:50:49 [logger.py:42] Received request cmpl-ac5c47d5c6734d2fa189012fe80feaa5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:49 [async_llm.py:261] Added request cmpl-ac5c47d5c6734d2fa189012fe80feaa5-0.
INFO 03-01 23:50:50 [logger.py:42] Received request cmpl-0b34f3bc7cf04ee9a324f98eb2f280b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:50 [async_llm.py:261] Added request cmpl-0b34f3bc7cf04ee9a324f98eb2f280b9-0.
INFO 03-01 23:50:51 [logger.py:42] Received request cmpl-5a7f0d33a517409d981fdecdc624f171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:51 [async_llm.py:261] Added request cmpl-5a7f0d33a517409d981fdecdc624f171-0.
INFO 03-01 23:50:52 [logger.py:42] Received request cmpl-1ead695527b54bde8645c22948b7c502-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:52 [async_llm.py:261] Added request cmpl-1ead695527b54bde8645c22948b7c502-0.
INFO 03-01 23:50:54 [logger.py:42] Received request cmpl-d3f05dcf5a714b30a73c67ae0143d19f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:54 [async_llm.py:261] Added request cmpl-d3f05dcf5a714b30a73c67ae0143d19f-0.
INFO 03-01 23:50:55 [logger.py:42] Received request cmpl-13ea7a9a371641eeadbc5bb425967651-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:55 [async_llm.py:261] Added request cmpl-13ea7a9a371641eeadbc5bb425967651-0.
INFO 03-01 23:50:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:50:56 [logger.py:42] Received request cmpl-ceba3d295e2c4c56bae3c8084fea7e50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:56 [async_llm.py:261] Added request cmpl-ceba3d295e2c4c56bae3c8084fea7e50-0.
INFO 03-01 23:50:57 [logger.py:42] Received request cmpl-13275e356c17406da6bb850a6b9c9513-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:57 [async_llm.py:261] Added request cmpl-13275e356c17406da6bb850a6b9c9513-0.
INFO 03-01 23:50:58 [logger.py:42] Received request cmpl-e4edc6ab1ab84b1da80e1d6ea74fcf70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:58 [async_llm.py:261] Added request cmpl-e4edc6ab1ab84b1da80e1d6ea74fcf70-0.
INFO 03-01 23:50:59 [logger.py:42] Received request cmpl-087ba689b52b4b2b9b260edaf7312a45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:50:59 [async_llm.py:261] Added request cmpl-087ba689b52b4b2b9b260edaf7312a45-0.
INFO 03-01 23:51:00 [logger.py:42] Received request cmpl-eca5c399a8de41f9bc8f4ad049158045-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:00 [async_llm.py:261] Added request cmpl-eca5c399a8de41f9bc8f4ad049158045-0.
INFO 03-01 23:51:01 [logger.py:42] Received request cmpl-3528ab68252a4922ad0fd68d59a76d1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:51:01 [async_llm.py:261] Added request cmpl-3528ab68252a4922ad0fd68d59a76d1d-0.
[... 3 identical request cycles (Received request → 200 OK → Added request) omitted, 23:51:02–23:51:04; entries differ only in timestamp and request ID ...]
INFO 03-01 23:51:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles (Received request → 200 OK → Added request) omitted, 23:51:06–23:51:14; entries differ only in timestamp and request ID ...]
INFO 03-01 23:51:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles (Received request → 200 OK → Added request) omitted, 23:51:15–23:51:24; entries differ only in timestamp and request ID ...]
INFO 03-01 23:51:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 identical request cycles (Received request → 200 OK → Added request) omitted, 23:51:25–23:51:35; entries differ only in timestamp and request ID ...]
INFO 03-01 23:51:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles (Received request → 200 OK → Added request) omitted, 23:51:36–23:51:45; entries differ only in timestamp and request ID ...]
INFO 03-01 23:51:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... repeated log entries omitted: the same "Received request" / "200 OK" / "Added request" cycle recurs roughly once per second from 23:51:46 through 23:52:24, with an identical prompt and identical SamplingParams; only the request IDs and timestamps change. The periodic "Engine 000" throughput lines over this interval are likewise unchanged ...]
INFO 03-01 23:52:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:25 [logger.py:42] Received request cmpl-0658db7348ad43c092d734aad3d3fa39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:25 [async_llm.py:261] Added request cmpl-0658db7348ad43c092d734aad3d3fa39-0.
INFO 03-01 23:52:27 [logger.py:42] Received request cmpl-47e914ea39294e3b8b8f0ade474e734a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:27 [async_llm.py:261] Added request cmpl-47e914ea39294e3b8b8f0ade474e734a-0.
INFO 03-01 23:52:28 [logger.py:42] Received request cmpl-4629ba4866f140d4ab3dca9bc8c11174-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:28 [async_llm.py:261] Added request cmpl-4629ba4866f140d4ab3dca9bc8c11174-0.
INFO 03-01 23:52:29 [logger.py:42] Received request cmpl-7eece6b3284b4d8dbf6e7cc77674e5e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:29 [async_llm.py:261] Added request cmpl-7eece6b3284b4d8dbf6e7cc77674e5e3-0.
INFO 03-01 23:52:30 [logger.py:42] Received request cmpl-5c3d3af9085943c89e2598488f0537fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:30 [async_llm.py:261] Added request cmpl-5c3d3af9085943c89e2598488f0537fc-0.
INFO 03-01 23:52:31 [logger.py:42] Received request cmpl-349952ada1b04691b89adcae8bd3ca0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:31 [async_llm.py:261] Added request cmpl-349952ada1b04691b89adcae8bd3ca0f-0.
INFO 03-01 23:52:32 [logger.py:42] Received request cmpl-33ce6d4700424e8f81d6ea9e987ab8d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:32 [async_llm.py:261] Added request cmpl-33ce6d4700424e8f81d6ea9e987ab8d7-0.
INFO 03-01 23:52:33 [logger.py:42] Received request cmpl-f6f146bce4b440eda20d63e0410e4228-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:33 [async_llm.py:261] Added request cmpl-f6f146bce4b440eda20d63e0410e4228-0.
INFO 03-01 23:52:34 [logger.py:42] Received request cmpl-ad421598f0954600937bb5f713a0fd11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:34 [async_llm.py:261] Added request cmpl-ad421598f0954600937bb5f713a0fd11-0.
INFO 03-01 23:52:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:35 [logger.py:42] Received request cmpl-d473e2c7dc7b4cc6b2f09ac254558a5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:35 [async_llm.py:261] Added request cmpl-d473e2c7dc7b4cc6b2f09ac254558a5b-0.
INFO 03-01 23:52:36 [logger.py:42] Received request cmpl-c69e90818c384864aeab7d7f769fcd33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:36 [async_llm.py:261] Added request cmpl-c69e90818c384864aeab7d7f769fcd33-0.
INFO 03-01 23:52:37 [logger.py:42] Received request cmpl-817d453132ce49efa09c7ae8b31274d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:37 [async_llm.py:261] Added request cmpl-817d453132ce49efa09c7ae8b31274d5-0.
INFO 03-01 23:52:39 [logger.py:42] Received request cmpl-ff59053b10134994b9f98faa137f2cd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:39 [async_llm.py:261] Added request cmpl-ff59053b10134994b9f98faa137f2cd7-0.
INFO 03-01 23:52:40 [logger.py:42] Received request cmpl-9f95811a27b24d5b9028bc6a70f8298c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:40 [async_llm.py:261] Added request cmpl-9f95811a27b24d5b9028bc6a70f8298c-0.
INFO 03-01 23:52:41 [logger.py:42] Received request cmpl-e4ce8d654998489d98ccd269c76ed2f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:41 [async_llm.py:261] Added request cmpl-e4ce8d654998489d98ccd269c76ed2f3-0.
INFO 03-01 23:52:42 [logger.py:42] Received request cmpl-263ece2c2a5f4cf68cec89f83d7989ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:42 [async_llm.py:261] Added request cmpl-263ece2c2a5f4cf68cec89f83d7989ff-0.
INFO 03-01 23:52:43 [logger.py:42] Received request cmpl-1187c880b6a6472b99cf216bdd0f645a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:43 [async_llm.py:261] Added request cmpl-1187c880b6a6472b99cf216bdd0f645a-0.
INFO 03-01 23:52:44 [logger.py:42] Received request cmpl-078dda6987da4dd8a26048ae2b0772ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:44 [async_llm.py:261] Added request cmpl-078dda6987da4dd8a26048ae2b0772ac-0.
INFO 03-01 23:52:45 [logger.py:42] Received request cmpl-772e92211ed14aed8a7b349f2691dfd4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:45 [async_llm.py:261] Added request cmpl-772e92211ed14aed8a7b349f2691dfd4-0.
INFO 03-01 23:52:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:46 [logger.py:42] Received request cmpl-681856a929f34cb88369b0c6a5ff1c67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:46 [async_llm.py:261] Added request cmpl-681856a929f34cb88369b0c6a5ff1c67-0.
INFO 03-01 23:52:47 [logger.py:42] Received request cmpl-b62b10ce87f7426299532956cbaa7768-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:47 [async_llm.py:261] Added request cmpl-b62b10ce87f7426299532956cbaa7768-0.
INFO 03-01 23:52:48 [logger.py:42] Received request cmpl-eb24b878403a4962949167860674226e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:48 [async_llm.py:261] Added request cmpl-eb24b878403a4962949167860674226e-0.
INFO 03-01 23:52:50 [logger.py:42] Received request cmpl-02d917c20dd8469ca28b9ba765826107-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:50 [async_llm.py:261] Added request cmpl-02d917c20dd8469ca28b9ba765826107-0.
INFO 03-01 23:52:51 [logger.py:42] Received request cmpl-c216a49c46cd409aa38ef23a4135e34d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:51 [async_llm.py:261] Added request cmpl-c216a49c46cd409aa38ef23a4135e34d-0.
INFO 03-01 23:52:52 [logger.py:42] Received request cmpl-e50850df98d24b90ba93fa3269478ddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:52 [async_llm.py:261] Added request cmpl-e50850df98d24b90ba93fa3269478ddd-0.
INFO 03-01 23:52:53 [logger.py:42] Received request cmpl-3632513f6e584c139f24b3e72362e6a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:53 [async_llm.py:261] Added request cmpl-3632513f6e584c139f24b3e72362e6a6-0.
INFO 03-01 23:52:54 [logger.py:42] Received request cmpl-65a9c308d36642c3b7bcc457c24d399a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:54 [async_llm.py:261] Added request cmpl-65a9c308d36642c3b7bcc457c24d399a-0.
INFO 03-01 23:52:55 [logger.py:42] Received request cmpl-b404041fd31248eaa3900278a78e26df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:55 [async_llm.py:261] Added request cmpl-b404041fd31248eaa3900278a78e26df-0.
INFO 03-01 23:52:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:52:56 [logger.py:42] Received request cmpl-b1970856b14a483bb96243abf150a025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:56 [async_llm.py:261] Added request cmpl-b1970856b14a483bb96243abf150a025-0.
INFO 03-01 23:52:57 [logger.py:42] Received request cmpl-d8333fa1fbe842b092b4b91279da3cd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:57 [async_llm.py:261] Added request cmpl-d8333fa1fbe842b092b4b91279da3cd5-0.
INFO 03-01 23:52:58 [logger.py:42] Received request cmpl-db0fab9c047b4952967272339086d44f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:58 [async_llm.py:261] Added request cmpl-db0fab9c047b4952967272339086d44f-0.
INFO 03-01 23:52:59 [logger.py:42] Received request cmpl-76afd75fb9b24537b77b2edef96449e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:52:59 [async_llm.py:261] Added request cmpl-76afd75fb9b24537b77b2edef96449e4-0.
INFO 03-01 23:53:00 [logger.py:42] Received request cmpl-8a431ed5d2e64d46aa382fd7fb7e39bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:00 [async_llm.py:261] Added request cmpl-8a431ed5d2e64d46aa382fd7fb7e39bb-0.
INFO 03-01 23:53:02 [logger.py:42] Received request cmpl-b357974841374076bca5cba8423bb2a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:02 [async_llm.py:261] Added request cmpl-b357974841374076bca5cba8423bb2a1-0.
INFO 03-01 23:53:03 [logger.py:42] Received request cmpl-1f2f4fa6a75945b68dd799f2c44db001-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:03 [async_llm.py:261] Added request cmpl-1f2f4fa6a75945b68dd799f2c44db001-0.
INFO 03-01 23:53:04 [logger.py:42] Received request cmpl-8e3c8b78be11426b8d136b42f15ef796-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:04 [async_llm.py:261] Added request cmpl-8e3c8b78be11426b8d136b42f15ef796-0.
INFO 03-01 23:53:05 [logger.py:42] Received request cmpl-a82d367d24fd4ee6a8790b755597c45e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:05 [async_llm.py:261] Added request cmpl-a82d367d24fd4ee6a8790b755597c45e-0.
INFO 03-01 23:53:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:06 [logger.py:42] Received request cmpl-7fb3434c0e6b432cae0429575c76810b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:06 [async_llm.py:261] Added request cmpl-7fb3434c0e6b432cae0429575c76810b-0.
INFO 03-01 23:53:07 [logger.py:42] Received request cmpl-42a4b26accc14f7cb9033fcde08329bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:07 [async_llm.py:261] Added request cmpl-42a4b26accc14f7cb9033fcde08329bf-0.
INFO 03-01 23:53:08 [logger.py:42] Received request cmpl-f80a0c5994c449b383dd882a42727769-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:08 [async_llm.py:261] Added request cmpl-f80a0c5994c449b383dd882a42727769-0.
INFO 03-01 23:53:09 [logger.py:42] Received request cmpl-40671ce4a54c437c85108127b577e48f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:09 [async_llm.py:261] Added request cmpl-40671ce4a54c437c85108127b577e48f-0.
INFO 03-01 23:53:10 [logger.py:42] Received request cmpl-9bcacd873e87493197f78058675b0d24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:10 [async_llm.py:261] Added request cmpl-9bcacd873e87493197f78058675b0d24-0.
INFO 03-01 23:53:11 [logger.py:42] Received request cmpl-f6f57e84cd9446ab980cfcaedfbdb1b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:11 [async_llm.py:261] Added request cmpl-f6f57e84cd9446ab980cfcaedfbdb1b1-0.
INFO 03-01 23:53:13 [logger.py:42] Received request cmpl-588052627573464fa51acc44ebe28ec7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:13 [async_llm.py:261] Added request cmpl-588052627573464fa51acc44ebe28ec7-0.
INFO 03-01 23:53:14 [logger.py:42] Received request cmpl-a9b0f17ba693476c8490375d8be385cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:14 [async_llm.py:261] Added request cmpl-a9b0f17ba693476c8490375d8be385cb-0.
INFO 03-01 23:53:15 [logger.py:42] Received request cmpl-e808874375d14098a133d727584d2cc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:15 [async_llm.py:261] Added request cmpl-e808874375d14098a133d727584d2cc3-0.
INFO 03-01 23:53:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:16 [logger.py:42] Received request cmpl-c64909644fb941088182680d19763faf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:16 [async_llm.py:261] Added request cmpl-c64909644fb941088182680d19763faf-0.
INFO 03-01 23:53:17 [logger.py:42] Received request cmpl-5706d95fa6d54e9082e671b4a3ae99e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:17 [async_llm.py:261] Added request cmpl-5706d95fa6d54e9082e671b4a3ae99e7-0.
INFO 03-01 23:53:18 [logger.py:42] Received request cmpl-bb29b64ff16d45ca9a8267609ed31322-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:18 [async_llm.py:261] Added request cmpl-bb29b64ff16d45ca9a8267609ed31322-0.
INFO 03-01 23:53:19 [logger.py:42] Received request cmpl-685bbc8fb52a4e4e83f784c4f2cd8b91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:19 [async_llm.py:261] Added request cmpl-685bbc8fb52a4e4e83f784c4f2cd8b91-0.
INFO 03-01 23:53:20 [logger.py:42] Received request cmpl-3452c6819fba45fd940ca3380f5347cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:20 [async_llm.py:261] Added request cmpl-3452c6819fba45fd940ca3380f5347cf-0.
INFO 03-01 23:53:21 [logger.py:42] Received request cmpl-9090c2582ac840a7857b0d623faa27e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:21 [async_llm.py:261] Added request cmpl-9090c2582ac840a7857b0d623faa27e2-0.
INFO 03-01 23:53:22 [logger.py:42] Received request cmpl-8f55399b1b184e25b80227ff72d3b481-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:22 [async_llm.py:261] Added request cmpl-8f55399b1b184e25b80227ff72d3b481-0.
INFO 03-01 23:53:23 [logger.py:42] Received request cmpl-cbf2f59c723c4a83b8bcf6ff89e79d70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:23 [async_llm.py:261] Added request cmpl-cbf2f59c723c4a83b8bcf6ff89e79d70-0.
INFO 03-01 23:53:25 [logger.py:42] Received request cmpl-46d8c73fa96e4a509f32027a6304dd63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:25 [async_llm.py:261] Added request cmpl-46d8c73fa96e4a509f32027a6304dd63-0.
INFO 03-01 23:53:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:26 [logger.py:42] Received request cmpl-765a1efac94c4f0db8febb869de50d28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:26 [async_llm.py:261] Added request cmpl-765a1efac94c4f0db8febb869de50d28-0.
INFO 03-01 23:53:27 [logger.py:42] Received request cmpl-89cf9b9dbb5542bd845e2bcc95003e07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:27 [async_llm.py:261] Added request cmpl-89cf9b9dbb5542bd845e2bcc95003e07-0.
INFO 03-01 23:53:28 [logger.py:42] Received request cmpl-ce1d8702a1d24dcba3f1fbfe5c3e73a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:28 [async_llm.py:261] Added request cmpl-ce1d8702a1d24dcba3f1fbfe5c3e73a7-0.
INFO 03-01 23:53:29 [logger.py:42] Received request cmpl-60a1aa5c35da4f39a18fb3a5854abe7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:29 [async_llm.py:261] Added request cmpl-60a1aa5c35da4f39a18fb3a5854abe7f-0.
INFO 03-01 23:53:30 [logger.py:42] Received request cmpl-2889308870d7453f92bfe62a3264664d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:30 [async_llm.py:261] Added request cmpl-2889308870d7453f92bfe62a3264664d-0.
INFO 03-01 23:53:31 [logger.py:42] Received request cmpl-1d41612a87cd4551a338027b2192e861-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:31 [async_llm.py:261] Added request cmpl-1d41612a87cd4551a338027b2192e861-0.
INFO 03-01 23:53:32 [logger.py:42] Received request cmpl-b85e18e5af4f4a7eb68ca8508843e8f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:32 [async_llm.py:261] Added request cmpl-b85e18e5af4f4a7eb68ca8508843e8f6-0.
INFO 03-01 23:53:33 [logger.py:42] Received request cmpl-0cd5cff91d8a49a29a44f0710f8f4c36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:33 [async_llm.py:261] Added request cmpl-0cd5cff91d8a49a29a44f0710f8f4c36-0.
INFO 03-01 23:53:34 [logger.py:42] Received request cmpl-8a87389b36434b7e8bacdc2f30e11121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:34 [async_llm.py:261] Added request cmpl-8a87389b36434b7e8bacdc2f30e11121-0.
INFO 03-01 23:53:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:35 [logger.py:42] Received request cmpl-eecdc86ea7464b5e9ab112af7fdbd113-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:35 [async_llm.py:261] Added request cmpl-eecdc86ea7464b5e9ab112af7fdbd113-0.
INFO 03-01 23:53:37 [logger.py:42] Received request cmpl-3f10e7ce670a4502a3b8139ba75dcffa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:37 [async_llm.py:261] Added request cmpl-3f10e7ce670a4502a3b8139ba75dcffa-0.
INFO 03-01 23:53:38 [logger.py:42] Received request cmpl-19eab5aeb82a4276ae2e08c42744236b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:38 [async_llm.py:261] Added request cmpl-19eab5aeb82a4276ae2e08c42744236b-0.
INFO 03-01 23:53:39 [logger.py:42] Received request cmpl-acf5c54fbf864b429b62dfb2d2603e28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:39 [async_llm.py:261] Added request cmpl-acf5c54fbf864b429b62dfb2d2603e28-0.
INFO 03-01 23:53:40 [logger.py:42] Received request cmpl-7f262efa06ff4914a1dba2e77d0155af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:40 [async_llm.py:261] Added request cmpl-7f262efa06ff4914a1dba2e77d0155af-0.
INFO 03-01 23:53:41 [logger.py:42] Received request cmpl-eb09d245bd9d4f0b8fdb40d9ee336676-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:41 [async_llm.py:261] Added request cmpl-eb09d245bd9d4f0b8fdb40d9ee336676-0.
INFO 03-01 23:53:42 [logger.py:42] Received request cmpl-a14af1165abb419cbedd6dd548d89284-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:42 [async_llm.py:261] Added request cmpl-a14af1165abb419cbedd6dd548d89284-0.
INFO 03-01 23:53:43 [logger.py:42] Received request cmpl-f4f61538e8084f958047b0d0d83c42c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:43 [async_llm.py:261] Added request cmpl-f4f61538e8084f958047b0d0d83c42c2-0.
INFO 03-01 23:53:44 [logger.py:42] Received request cmpl-38b7261601aa4854b8bc8f4184c96270-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:44 [async_llm.py:261] Added request cmpl-38b7261601aa4854b8bc8f4184c96270-0.
INFO 03-01 23:53:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:45 [logger.py:42] Received request cmpl-7f3311acbb104b25a9ec12aeb78b422e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:45 [async_llm.py:261] Added request cmpl-7f3311acbb104b25a9ec12aeb78b422e-0.
INFO 03-01 23:53:46 [logger.py:42] Received request cmpl-ea846ede640f45969ac100bbc0f524f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:46 [async_llm.py:261] Added request cmpl-ea846ede640f45969ac100bbc0f524f2-0.
INFO 03-01 23:53:48 [logger.py:42] Received request cmpl-fa18bd490a21491385d9a9262f1f5b4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:48 [async_llm.py:261] Added request cmpl-fa18bd490a21491385d9a9262f1f5b4e-0.
INFO 03-01 23:53:49 [logger.py:42] Received request cmpl-bab47a9bd57a408a8fe144c66084643c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:49 [async_llm.py:261] Added request cmpl-bab47a9bd57a408a8fe144c66084643c-0.
INFO 03-01 23:53:50 [logger.py:42] Received request cmpl-cdca5816c05642c0b09af30c0b373960-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:50 [async_llm.py:261] Added request cmpl-cdca5816c05642c0b09af30c0b373960-0.
INFO 03-01 23:53:51 [logger.py:42] Received request cmpl-5a0e1b4feb164f55901a7801e1d00103-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:51 [async_llm.py:261] Added request cmpl-5a0e1b4feb164f55901a7801e1d00103-0.
INFO 03-01 23:53:52 [logger.py:42] Received request cmpl-aa68130f2c154f71a426336ab29cdb3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:52 [async_llm.py:261] Added request cmpl-aa68130f2c154f71a426336ab29cdb3a-0.
INFO 03-01 23:53:53 [logger.py:42] Received request cmpl-86c16e609c554c5c86676549037046ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:53 [async_llm.py:261] Added request cmpl-86c16e609c554c5c86676549037046ed-0.
INFO 03-01 23:53:54 [logger.py:42] Received request cmpl-4a88384812c242f9a4049934982f402e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:54 [async_llm.py:261] Added request cmpl-4a88384812c242f9a4049934982f402e-0.
INFO 03-01 23:53:55 [logger.py:42] Received request cmpl-7924d376fdd64a18aaba21f53afec551-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:55 [async_llm.py:261] Added request cmpl-7924d376fdd64a18aaba21f53afec551-0.
INFO 03-01 23:53:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:53:56 [logger.py:42] Received request cmpl-e3d9acd8c49f45c5b1d259c23b6bb6e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:56 [async_llm.py:261] Added request cmpl-e3d9acd8c49f45c5b1d259c23b6bb6e6-0.
INFO 03-01 23:53:57 [logger.py:42] Received request cmpl-6839109b402e4997bedfac75470117e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:57 [async_llm.py:261] Added request cmpl-6839109b402e4997bedfac75470117e7-0.
INFO 03-01 23:53:58 [logger.py:42] Received request cmpl-f1b3a5a606544bdb95d01b1fe638e195-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:53:58 [async_llm.py:261] Added request cmpl-f1b3a5a606544bdb95d01b1fe638e195-0.
INFO 03-01 23:54:00 [logger.py:42] Received request cmpl-f35fd509270b4dbe894a850db529dcc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:00 [async_llm.py:261] Added request cmpl-f35fd509270b4dbe894a850db529dcc4-0.
INFO 03-01 23:54:01 [logger.py:42] Received request cmpl-4155a0e9092e4b759c49408df3787a52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:01 [async_llm.py:261] Added request cmpl-4155a0e9092e4b759c49408df3787a52-0.
INFO 03-01 23:54:02 [logger.py:42] Received request cmpl-9302955e59dc4966857c0654f5f1698e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:02 [async_llm.py:261] Added request cmpl-9302955e59dc4966857c0654f5f1698e-0.
INFO 03-01 23:54:03 [logger.py:42] Received request cmpl-1fa227befb41426bad0f51510042556f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:03 [async_llm.py:261] Added request cmpl-1fa227befb41426bad0f51510042556f-0.
INFO 03-01 23:54:04 [logger.py:42] Received request cmpl-aecedd50912844d698fb72cff2d84d2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:04 [async_llm.py:261] Added request cmpl-aecedd50912844d698fb72cff2d84d2e-0.
INFO 03-01 23:54:05 [logger.py:42] Received request cmpl-b9f1654e8ecd4c52be735519047c359c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:05 [async_llm.py:261] Added request cmpl-b9f1654e8ecd4c52be735519047c359c-0.
INFO 03-01 23:54:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:06 [logger.py:42] Received request cmpl-1abc0a24fad74e1ab29b7192d48e342d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:06 [async_llm.py:261] Added request cmpl-1abc0a24fad74e1ab29b7192d48e342d-0.
INFO 03-01 23:54:07 [logger.py:42] Received request cmpl-0960a9faca114c609186a8af2e65e292-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:07 [async_llm.py:261] Added request cmpl-0960a9faca114c609186a8af2e65e292-0.
INFO 03-01 23:54:08 [logger.py:42] Received request cmpl-eebc454ddb734fae8a6489f5df21794a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:08 [async_llm.py:261] Added request cmpl-eebc454ddb734fae8a6489f5df21794a-0.
INFO 03-01 23:54:09 [logger.py:42] Received request cmpl-7f67151d89b34b2bb1e785eec009df98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:09 [async_llm.py:261] Added request cmpl-7f67151d89b34b2bb1e785eec009df98-0.
INFO 03-01 23:54:10 [logger.py:42] Received request cmpl-29175c21e1e84454af46ee83b8c2b645-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:10 [async_llm.py:261] Added request cmpl-29175c21e1e84454af46ee83b8c2b645-0.
INFO 03-01 23:54:12 [logger.py:42] Received request cmpl-590af4977e444691b904346abee6690d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:12 [async_llm.py:261] Added request cmpl-590af4977e444691b904346abee6690d-0.
INFO 03-01 23:54:13 [logger.py:42] Received request cmpl-4bbce011a7da450abfb9f0ab835b89fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:13 [async_llm.py:261] Added request cmpl-4bbce011a7da450abfb9f0ab835b89fb-0.
INFO 03-01 23:54:14 [logger.py:42] Received request cmpl-6d5e56c4bf844dcaa5a3c9c5b43e62cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:14 [async_llm.py:261] Added request cmpl-6d5e56c4bf844dcaa5a3c9c5b43e62cc-0.
INFO 03-01 23:54:15 [logger.py:42] Received request cmpl-8955cf5454b84978b999456d4842aa0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:15 [async_llm.py:261] Added request cmpl-8955cf5454b84978b999456d4842aa0c-0.
INFO 03-01 23:54:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:16 [logger.py:42] Received request cmpl-9c05f799533b41efaf2757e6d8898854-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:16 [async_llm.py:261] Added request cmpl-9c05f799533b41efaf2757e6d8898854-0.
INFO 03-01 23:54:17 [logger.py:42] Received request cmpl-1f1f7197c8a2426b8caa4558d04418a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:17 [async_llm.py:261] Added request cmpl-1f1f7197c8a2426b8caa4558d04418a1-0.
INFO 03-01 23:54:18 [logger.py:42] Received request cmpl-34b41810c2a540539d5c893de219f8b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:18 [async_llm.py:261] Added request cmpl-34b41810c2a540539d5c893de219f8b7-0.
INFO 03-01 23:54:19 [logger.py:42] Received request cmpl-854c970c80454b0c8773d045b4a324e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:19 [async_llm.py:261] Added request cmpl-854c970c80454b0c8773d045b4a324e0-0.
INFO 03-01 23:54:20 [logger.py:42] Received request cmpl-429c19868b8a442ab02d6ef6a3e400c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:20 [async_llm.py:261] Added request cmpl-429c19868b8a442ab02d6ef6a3e400c9-0.
INFO 03-01 23:54:21 [logger.py:42] Received request cmpl-5b2ff178ec884d7fad4db6c28770fd3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:21 [async_llm.py:261] Added request cmpl-5b2ff178ec884d7fad4db6c28770fd3e-0.
INFO 03-01 23:54:22 [logger.py:42] Received request cmpl-0c9595195adb4a1d95e5163ceafbe91e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:22 [async_llm.py:261] Added request cmpl-0c9595195adb4a1d95e5163ceafbe91e-0.
INFO 03-01 23:54:24 [logger.py:42] Received request cmpl-21170452c17e45d1b6429a60b918e18f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:24 [async_llm.py:261] Added request cmpl-21170452c17e45d1b6429a60b918e18f-0.
INFO 03-01 23:54:25 [logger.py:42] Received request cmpl-b37f99893aab420d996719dec71a43a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:25 [async_llm.py:261] Added request cmpl-b37f99893aab420d996719dec71a43a8-0.
INFO 03-01 23:54:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:26 [logger.py:42] Received request cmpl-baa69391f6ea4ac1b51f27686b541254-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:26 [async_llm.py:261] Added request cmpl-baa69391f6ea4ac1b51f27686b541254-0.
INFO 03-01 23:54:27 [logger.py:42] Received request cmpl-1457a39cf5e8436e9a3326e43fc02b08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:27 [async_llm.py:261] Added request cmpl-1457a39cf5e8436e9a3326e43fc02b08-0.
INFO 03-01 23:54:28 [logger.py:42] Received request cmpl-78197aa8b42847bba92cc21ffeec2aa3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:28 [async_llm.py:261] Added request cmpl-78197aa8b42847bba92cc21ffeec2aa3-0.
INFO 03-01 23:54:29 [logger.py:42] Received request cmpl-7fa24263c4124f718f5083812c8cfcf0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:29 [async_llm.py:261] Added request cmpl-7fa24263c4124f718f5083812c8cfcf0-0.
INFO 03-01 23:54:30 [logger.py:42] Received request cmpl-579135168c284c78bc3e4a8df00713a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:30 [async_llm.py:261] Added request cmpl-579135168c284c78bc3e4a8df00713a9-0.
INFO 03-01 23:54:31 [logger.py:42] Received request cmpl-506effbee38c48f790558f0224202561-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:31 [async_llm.py:261] Added request cmpl-506effbee38c48f790558f0224202561-0.
INFO 03-01 23:54:32 [logger.py:42] Received request cmpl-2f4608d2723b4dd8a5366e450aa040e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:32 [async_llm.py:261] Added request cmpl-2f4608d2723b4dd8a5366e450aa040e8-0.
INFO 03-01 23:54:33 [logger.py:42] Received request cmpl-d15ecce0775e4c04906f729e39e00468-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:33 [async_llm.py:261] Added request cmpl-d15ecce0775e4c04906f729e39e00468-0.
INFO 03-01 23:54:35 [logger.py:42] Received request cmpl-c77e0e958eba4a39804c2a00a1dfc6a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:35 [async_llm.py:261] Added request cmpl-c77e0e958eba4a39804c2a00a1dfc6a1-0.
INFO 03-01 23:54:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:36 [logger.py:42] Received request cmpl-4443cc3ea063423abbe36285b8701622-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:36 [async_llm.py:261] Added request cmpl-4443cc3ea063423abbe36285b8701622-0.
INFO 03-01 23:54:37 [logger.py:42] Received request cmpl-4039f8535d124176b29132a72f5ae3e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:37 [async_llm.py:261] Added request cmpl-4039f8535d124176b29132a72f5ae3e9-0.
INFO 03-01 23:54:38 [logger.py:42] Received request cmpl-69e7ec65b4fd4fc6902be235676c0715-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:38 [async_llm.py:261] Added request cmpl-69e7ec65b4fd4fc6902be235676c0715-0.
INFO 03-01 23:54:39 [logger.py:42] Received request cmpl-a492d2da62344b5d8ff3dfb2c542a380-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:39 [async_llm.py:261] Added request cmpl-a492d2da62344b5d8ff3dfb2c542a380-0.
INFO 03-01 23:54:40 [logger.py:42] Received request cmpl-151b7aff880a4dd29c3dfd2d715cbc03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:40 [async_llm.py:261] Added request cmpl-151b7aff880a4dd29c3dfd2d715cbc03-0.
INFO 03-01 23:54:41 [logger.py:42] Received request cmpl-6b95592601194faab168f3952df5dc57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:41 [async_llm.py:261] Added request cmpl-6b95592601194faab168f3952df5dc57-0.
INFO 03-01 23:54:42 [logger.py:42] Received request cmpl-70061f6b61f74c86993395239951a0a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:42 [async_llm.py:261] Added request cmpl-70061f6b61f74c86993395239951a0a4-0.
INFO 03-01 23:54:43 [logger.py:42] Received request cmpl-b668d2073f584c349cd787b242364654-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:43 [async_llm.py:261] Added request cmpl-b668d2073f584c349cd787b242364654-0.
INFO 03-01 23:54:44 [logger.py:42] Received request cmpl-d6bb449a13ca4a3292f3eb088d5a8a05-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:44 [async_llm.py:261] Added request cmpl-d6bb449a13ca4a3292f3eb088d5a8a05-0.
INFO 03-01 23:54:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:45 [logger.py:42] Received request cmpl-bcc5e6b7aae443f2a8f3b792f9a0f60f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:45 [async_llm.py:261] Added request cmpl-bcc5e6b7aae443f2a8f3b792f9a0f60f-0.
INFO 03-01 23:54:47 [logger.py:42] Received request cmpl-9380e570d58b4a1592ca857e167770e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:47 [async_llm.py:261] Added request cmpl-9380e570d58b4a1592ca857e167770e2-0.
INFO 03-01 23:54:48 [logger.py:42] Received request cmpl-c2456ef8d6bb4cb193037af8b24e0c94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:48 [async_llm.py:261] Added request cmpl-c2456ef8d6bb4cb193037af8b24e0c94-0.
INFO 03-01 23:54:49 [logger.py:42] Received request cmpl-53f220ab9e814da5af6d35b2bb908f77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:49 [async_llm.py:261] Added request cmpl-53f220ab9e814da5af6d35b2bb908f77-0.
INFO 03-01 23:54:50 [logger.py:42] Received request cmpl-82bf55a0773c4303accb5ba6b3d77beb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:50 [async_llm.py:261] Added request cmpl-82bf55a0773c4303accb5ba6b3d77beb-0.
INFO 03-01 23:54:51 [logger.py:42] Received request cmpl-5c73692eb6db4063a92f4f5ade934d7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:51 [async_llm.py:261] Added request cmpl-5c73692eb6db4063a92f4f5ade934d7b-0.
INFO 03-01 23:54:52 [logger.py:42] Received request cmpl-2cbb0a87672c4b6191f41600dd72640c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:52 [async_llm.py:261] Added request cmpl-2cbb0a87672c4b6191f41600dd72640c-0.
INFO 03-01 23:54:53 [logger.py:42] Received request cmpl-2732073a0f2147f19db55c65431beb32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:53 [async_llm.py:261] Added request cmpl-2732073a0f2147f19db55c65431beb32-0.
INFO 03-01 23:54:54 [logger.py:42] Received request cmpl-ae8a83110f8c4f34995272d501baa2fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:54 [async_llm.py:261] Added request cmpl-ae8a83110f8c4f34995272d501baa2fe-0.
INFO 03-01 23:54:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:54:55 [logger.py:42] Received request cmpl-cacbd9c73dba401484f5214722fefbbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:55 [async_llm.py:261] Added request cmpl-cacbd9c73dba401484f5214722fefbbb-0.
INFO 03-01 23:54:56 [logger.py:42] Received request cmpl-95ebfb26afd245dda9ce79594b1f5adb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:56 [async_llm.py:261] Added request cmpl-95ebfb26afd245dda9ce79594b1f5adb-0.
INFO 03-01 23:54:57 [logger.py:42] Received request cmpl-4180740f52a94e98a5d0649e9a874a2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:54:57 [async_llm.py:261] Added request cmpl-4180740f52a94e98a5d0649e9a874a2b-0.
INFO 03-01 23:55:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
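The periodic `loggers.py:116` lines summarize engine health: prompt and generation throughput, queue depth, and cache utilization. A small sketch, assuming this exact line format, that extracts those figures from one such entry:

```python
import re

# Parse one periodic vLLM stats line of the form emitted by loggers.py above.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv>[\d.]+)%"
)

def parse_stats(line: str) -> dict:
    """Return throughput and queue metrics from one stats log line."""
    m = STATS_RE.search(line)
    if m is None:
        raise ValueError("not an engine stats line")
    return {
        "prompt_tps": float(m["prompt"]),
        "gen_tps": float(m["gen"]),
        "running": int(m["running"]),
        "waiting": int(m["waiting"]),
        "kv_cache_pct": float(m["kv"]),
    }

# Example taken verbatim from the log above.
line = ("INFO 03-01 23:55:05 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, "
        "Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
        "Prefix cache hit rate: 0.0%")
print(parse_stats(line))
```

Feeding each stats line through such a parser turns the log stream into time-series points suitable for dashboards or alerting.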
INFO 03-01 23:55:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:42 [logger.py:42] Received request cmpl-bdd5a4734c664005890bd82ab28c78ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:42 [async_llm.py:261] Added request cmpl-bdd5a4734c664005890bd82ab28c78ce-0.
INFO 03-01 23:55:43 [logger.py:42] Received request cmpl-26edfed420864aa0907d21e954866e14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:43 [async_llm.py:261] Added request cmpl-26edfed420864aa0907d21e954866e14-0.
INFO 03-01 23:55:45 [logger.py:42] Received request cmpl-8bcf278d9ebf48ec9506607a54050de5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:45 [async_llm.py:261] Added request cmpl-8bcf278d9ebf48ec9506607a54050de5-0.
INFO 03-01 23:55:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:46 [logger.py:42] Received request cmpl-74b2115df15e41b6ad0f0c4f5052fe92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:46 [async_llm.py:261] Added request cmpl-74b2115df15e41b6ad0f0c4f5052fe92-0.
INFO 03-01 23:55:47 [logger.py:42] Received request cmpl-383d32bbe1594d96a15eec4e2989cbca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:47 [async_llm.py:261] Added request cmpl-383d32bbe1594d96a15eec4e2989cbca-0.
INFO 03-01 23:55:48 [logger.py:42] Received request cmpl-29b7a83e42c44eaaaf07ca50153be672-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:48 [async_llm.py:261] Added request cmpl-29b7a83e42c44eaaaf07ca50153be672-0.
INFO 03-01 23:55:49 [logger.py:42] Received request cmpl-b70827c4f72641a6ba2a240418b71bc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:49 [async_llm.py:261] Added request cmpl-b70827c4f72641a6ba2a240418b71bc1-0.
INFO 03-01 23:55:50 [logger.py:42] Received request cmpl-7d5994628ae142168c1882c6ac0d15b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:50 [async_llm.py:261] Added request cmpl-7d5994628ae142168c1882c6ac0d15b2-0.
INFO 03-01 23:55:51 [logger.py:42] Received request cmpl-10115aed01f44169b049fe20fd6257da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:51 [async_llm.py:261] Added request cmpl-10115aed01f44169b049fe20fd6257da-0.
INFO 03-01 23:55:52 [logger.py:42] Received request cmpl-ad0377cdb53547aea43e130ab4a22913-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:52 [async_llm.py:261] Added request cmpl-ad0377cdb53547aea43e130ab4a22913-0.
INFO 03-01 23:55:53 [logger.py:42] Received request cmpl-e6951da5d9b14216b2a51edb7a789d8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:53 [async_llm.py:261] Added request cmpl-e6951da5d9b14216b2a51edb7a789d8c-0.
INFO 03-01 23:55:54 [logger.py:42] Received request cmpl-4103f52c7a9540bdab3c9b5712c85c76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:54 [async_llm.py:261] Added request cmpl-4103f52c7a9540bdab3c9b5712c85c76-0.
INFO 03-01 23:55:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:55:55 [logger.py:42] Received request cmpl-1fc4160e3a9c46e8a6698d96e930bfc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:55 [async_llm.py:261] Added request cmpl-1fc4160e3a9c46e8a6698d96e930bfc1-0.
INFO 03-01 23:55:57 [logger.py:42] Received request cmpl-3e403205ed0f457f85f491ea5bfd1af3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:57 [async_llm.py:261] Added request cmpl-3e403205ed0f457f85f491ea5bfd1af3-0.
INFO 03-01 23:55:58 [logger.py:42] Received request cmpl-48653a463d0d4f9cad42783c5bf2c0d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:58 [async_llm.py:261] Added request cmpl-48653a463d0d4f9cad42783c5bf2c0d9-0.
INFO 03-01 23:55:59 [logger.py:42] Received request cmpl-6c99dcf4991944548a33894c057304ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:55:59 [async_llm.py:261] Added request cmpl-6c99dcf4991944548a33894c057304ae-0.
INFO 03-01 23:56:00 [logger.py:42] Received request cmpl-36b15686f771471abf65d5cc0c605a52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:00 [async_llm.py:261] Added request cmpl-36b15686f771471abf65d5cc0c605a52-0.
INFO 03-01 23:56:01 [logger.py:42] Received request cmpl-c8c8591bb7374e6ea670a09d086dae15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:01 [async_llm.py:261] Added request cmpl-c8c8591bb7374e6ea670a09d086dae15-0.
INFO 03-01 23:56:02 [logger.py:42] Received request cmpl-69438e0815c44348a059f99ea733ef4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:02 [async_llm.py:261] Added request cmpl-69438e0815c44348a059f99ea733ef4f-0.
INFO 03-01 23:56:03 [logger.py:42] Received request cmpl-8899414474e348c5af9d0b4e7223ae7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:03 [async_llm.py:261] Added request cmpl-8899414474e348c5af9d0b4e7223ae7f-0.
INFO 03-01 23:56:04 [logger.py:42] Received request cmpl-bf7ea412b6e34a4096488a6f351723a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:04 [async_llm.py:261] Added request cmpl-bf7ea412b6e34a4096488a6f351723a6-0.
INFO 03-01 23:56:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:05 [logger.py:42] Received request cmpl-b4a87df50ffd4b6b880d1f1157e4ddd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:05 [async_llm.py:261] Added request cmpl-b4a87df50ffd4b6b880d1f1157e4ddd5-0.
INFO 03-01 23:56:06 [logger.py:42] Received request cmpl-c725d753039141fbbfde54e5fe78f1ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:06 [async_llm.py:261] Added request cmpl-c725d753039141fbbfde54e5fe78f1ea-0.
INFO 03-01 23:56:08 [logger.py:42] Received request cmpl-784f1f4a5a3148b5b6b35e3ed8091f82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:08 [async_llm.py:261] Added request cmpl-784f1f4a5a3148b5b6b35e3ed8091f82-0.
INFO 03-01 23:56:09 [logger.py:42] Received request cmpl-bd01f73fbdf845408fbe4002486d93ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:09 [async_llm.py:261] Added request cmpl-bd01f73fbdf845408fbe4002486d93ca-0.
INFO 03-01 23:56:10 [logger.py:42] Received request cmpl-73b085cea65647fe83b15c8a3b008e96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:10 [async_llm.py:261] Added request cmpl-73b085cea65647fe83b15c8a3b008e96-0.
INFO 03-01 23:56:11 [logger.py:42] Received request cmpl-75bdc114b35744d4b095c77a24f418b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:11 [async_llm.py:261] Added request cmpl-75bdc114b35744d4b095c77a24f418b4-0.
INFO 03-01 23:56:12 [logger.py:42] Received request cmpl-ec35933e57614aa192241f4b1fe01e59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:12 [async_llm.py:261] Added request cmpl-ec35933e57614aa192241f4b1fe01e59-0.
INFO 03-01 23:56:13 [logger.py:42] Received request cmpl-2c71b1ad4524453885d2a0c8061e7c1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:13 [async_llm.py:261] Added request cmpl-2c71b1ad4524453885d2a0c8061e7c1a-0.
INFO 03-01 23:56:14 [logger.py:42] Received request cmpl-3696d790d82e4023ad20023a73326e11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:14 [async_llm.py:261] Added request cmpl-3696d790d82e4023ad20023a73326e11-0.
INFO 03-01 23:56:15 [logger.py:42] Received request cmpl-2413f7e787864447b1eb2768208a05d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:15 [async_llm.py:261] Added request cmpl-2413f7e787864447b1eb2768208a05d2-0.
INFO 03-01 23:56:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:56:16 [logger.py:42] Received request cmpl-4f7c991b731040e0adbdc6e8c2c703a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:56:16 [async_llm.py:261] Added request cmpl-4f7c991b731040e0adbdc6e8c2c703a8-0.
INFO 03-01 23:56:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
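The entries above follow one fixed pattern, repeating roughly once per second. A condensed view can be recovered by parsing the timestamps of the "Received request" lines. This is a minimal sketch, assuming the vLLM log format shown above (the `PATTERN` regex and `request_timestamps` helper are illustrative, not part of InferX or vLLM; the year is supplied externally because the log omits it):

```python
import re
from datetime import datetime

# Sample entries in the log format shown above (params truncated with '...').
LOG = """\
INFO 03-01 23:56:16 [logger.py:42] Received request cmpl-4f7c991b731040e0adbdc6e8c2c703a8-0: prompt: 'write a quick sort algorithm.', ...
INFO 03-01 23:56:17 [logger.py:42] Received request cmpl-48db6294115b4caab8bbd6711aaf85e2-0: prompt: 'write a quick sort algorithm.', ...
INFO 03-01 23:56:18 [logger.py:42] Received request cmpl-516d0d061c3e4cec9d8d695c4551bfee-0: prompt: 'write a quick sort algorithm.', ...
"""

# Capture the MM-DD HH:MM:SS timestamp and the request ID of each entry.
PATTERN = re.compile(
    r"INFO (\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[logger\.py:\d+\] Received request (\S+):"
)

def request_timestamps(log_text, year=2026):
    """Return (request_id, datetime) pairs for every 'Received request' line."""
    return [
        (req_id, datetime.strptime(f"{year}-{ts}", "%Y-%m-%d %H:%M:%S"))
        for ts, req_id in PATTERN.findall(log_text)
    ]

reqs = request_timestamps(LOG)
span = (reqs[-1][1] - reqs[0][1]).total_seconds()
rate = (len(reqs) - 1) / span  # arrival rate over the window, requests/s
```

Applied to the full stream, this collapses the repeated cycles into a single arrival-rate figure (about one request per second here), which matches the ~6.3 prompt tokens/s the engine summary reports for a 7-token prompt.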
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:01 [async_llm.py:261] Added request cmpl-3f93ffe5028c4e0d89b61544816e7518-0.
INFO 03-01 23:57:02 [logger.py:42] Received request cmpl-abd9afe9d4d8477bbaf6118dbf742b5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:02 [async_llm.py:261] Added request cmpl-abd9afe9d4d8477bbaf6118dbf742b5b-0.
INFO 03-01 23:57:03 [logger.py:42] Received request cmpl-cf08bcd127e044b9bfbeac5cbd966a94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:03 [async_llm.py:261] Added request cmpl-cf08bcd127e044b9bfbeac5cbd966a94-0.
INFO 03-01 23:57:04 [logger.py:42] Received request cmpl-e4a9a2d0ac574beb90a82ea6da898ed3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:04 [async_llm.py:261] Added request cmpl-e4a9a2d0ac574beb90a82ea6da898ed3-0.
INFO 03-01 23:57:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:06 [logger.py:42] Received request cmpl-bfbc1676d03c406c87e7098d3a8d378b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:06 [async_llm.py:261] Added request cmpl-bfbc1676d03c406c87e7098d3a8d378b-0.
INFO 03-01 23:57:07 [logger.py:42] Received request cmpl-82d2cbfcab3f4811803037e086d40951-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:07 [async_llm.py:261] Added request cmpl-82d2cbfcab3f4811803037e086d40951-0.
INFO 03-01 23:57:08 [logger.py:42] Received request cmpl-07331c02de784daba573149f3088b133-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:08 [async_llm.py:261] Added request cmpl-07331c02de784daba573149f3088b133-0.
INFO 03-01 23:57:09 [logger.py:42] Received request cmpl-96bb888f70d0440dab057b4e2bb38cb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:09 [async_llm.py:261] Added request cmpl-96bb888f70d0440dab057b4e2bb38cb9-0.
INFO 03-01 23:57:10 [logger.py:42] Received request cmpl-2094995571044f029ab46078deff2118-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:10 [async_llm.py:261] Added request cmpl-2094995571044f029ab46078deff2118-0.
INFO 03-01 23:57:11 [logger.py:42] Received request cmpl-4498af1c708949f8b6c70df8095b4914-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:11 [async_llm.py:261] Added request cmpl-4498af1c708949f8b6c70df8095b4914-0.
INFO 03-01 23:57:12 [logger.py:42] Received request cmpl-17d7461b6de74969837e08bc0bdf78d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:12 [async_llm.py:261] Added request cmpl-17d7461b6de74969837e08bc0bdf78d1-0.
INFO 03-01 23:57:13 [logger.py:42] Received request cmpl-7aec6b2d47c546f0b97c2b97d273f6a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:13 [async_llm.py:261] Added request cmpl-7aec6b2d47c546f0b97c2b97d273f6a7-0.
INFO 03-01 23:57:14 [logger.py:42] Received request cmpl-71710e21725942e8846466410f6262d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:14 [async_llm.py:261] Added request cmpl-71710e21725942e8846466410f6262d7-0.
INFO 03-01 23:57:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:15 [logger.py:42] Received request cmpl-5cf27a411ff345eeb4d7d2d7775da862-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:15 [async_llm.py:261] Added request cmpl-5cf27a411ff345eeb4d7d2d7775da862-0.
INFO 03-01 23:57:16 [logger.py:42] Received request cmpl-b17692c1824e4c26b597f76011915e90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:16 [async_llm.py:261] Added request cmpl-b17692c1824e4c26b597f76011915e90-0.
INFO 03-01 23:57:18 [logger.py:42] Received request cmpl-82dae514dadf4864855711a51fe98a4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:18 [async_llm.py:261] Added request cmpl-82dae514dadf4864855711a51fe98a4b-0.
INFO 03-01 23:57:19 [logger.py:42] Received request cmpl-d30a6a784c384cc0b1a2802d77a51d52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:19 [async_llm.py:261] Added request cmpl-d30a6a784c384cc0b1a2802d77a51d52-0.
INFO 03-01 23:57:20 [logger.py:42] Received request cmpl-3122d4c97ad44a9584f3ca83f9a9fc83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:20 [async_llm.py:261] Added request cmpl-3122d4c97ad44a9584f3ca83f9a9fc83-0.
INFO 03-01 23:57:21 [logger.py:42] Received request cmpl-7205b2d3b633415a8b1669d7bfc2cb3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:21 [async_llm.py:261] Added request cmpl-7205b2d3b633415a8b1669d7bfc2cb3f-0.
INFO 03-01 23:57:22 [logger.py:42] Received request cmpl-84cae8752af54445bb2401c1a133030a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:22 [async_llm.py:261] Added request cmpl-84cae8752af54445bb2401c1a133030a-0.
INFO 03-01 23:57:23 [logger.py:42] Received request cmpl-da87950a1e6a4eeb841bf1cc517d1745-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:23 [async_llm.py:261] Added request cmpl-da87950a1e6a4eeb841bf1cc517d1745-0.
INFO 03-01 23:57:24 [logger.py:42] Received request cmpl-0cd6fc4b4c72459dbee28682f031d2ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:24 [async_llm.py:261] Added request cmpl-0cd6fc4b4c72459dbee28682f031d2ac-0.
INFO 03-01 23:57:25 [logger.py:42] Received request cmpl-ae7aeb3ab57147c685bf70dece37989f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:25 [async_llm.py:261] Added request cmpl-ae7aeb3ab57147c685bf70dece37989f-0.
INFO 03-01 23:57:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:26 [logger.py:42] Received request cmpl-6c0775ace9b54636ab83a2f4d8e30aef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:26 [async_llm.py:261] Added request cmpl-6c0775ace9b54636ab83a2f4d8e30aef-0.
INFO 03-01 23:57:27 [logger.py:42] Received request cmpl-8836e68375ee49ba8333077fcfb75c70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:27 [async_llm.py:261] Added request cmpl-8836e68375ee49ba8333077fcfb75c70-0.
INFO 03-01 23:57:28 [logger.py:42] Received request cmpl-1d9c24f033e5444f86982f3b11f1c6c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:28 [async_llm.py:261] Added request cmpl-1d9c24f033e5444f86982f3b11f1c6c6-0.
INFO 03-01 23:57:30 [logger.py:42] Received request cmpl-505e691304ae42e3ae1d26c17945c8ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:30 [async_llm.py:261] Added request cmpl-505e691304ae42e3ae1d26c17945c8ce-0.
INFO 03-01 23:57:31 [logger.py:42] Received request cmpl-245b4bc8e53f4b3d8b328d32617688c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:31 [async_llm.py:261] Added request cmpl-245b4bc8e53f4b3d8b328d32617688c3-0.
INFO 03-01 23:57:32 [logger.py:42] Received request cmpl-3bc33a44ccd14c4d9f8feda7b404bef6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:32 [async_llm.py:261] Added request cmpl-3bc33a44ccd14c4d9f8feda7b404bef6-0.
INFO 03-01 23:57:33 [logger.py:42] Received request cmpl-a18b9999b43b4554baeaa634700e9c23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:33 [async_llm.py:261] Added request cmpl-a18b9999b43b4554baeaa634700e9c23-0.
INFO 03-01 23:57:34 [logger.py:42] Received request cmpl-7d19db66014c4708a3c58b0969f4025d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:34 [async_llm.py:261] Added request cmpl-7d19db66014c4708a3c58b0969f4025d-0.
INFO 03-01 23:57:35 [logger.py:42] Received request cmpl-d74725910bbf4198abbf56cea302a266-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:35 [async_llm.py:261] Added request cmpl-d74725910bbf4198abbf56cea302a266-0.
INFO 03-01 23:57:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:36 [logger.py:42] Received request cmpl-d355e809efe8471893efcd59f775af63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:36 [async_llm.py:261] Added request cmpl-d355e809efe8471893efcd59f775af63-0.
INFO 03-01 23:57:37 [logger.py:42] Received request cmpl-f751429693d7488690c915a78f2f9560-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:37 [async_llm.py:261] Added request cmpl-f751429693d7488690c915a78f2f9560-0.
INFO 03-01 23:57:38 [logger.py:42] Received request cmpl-5cd3cd2de7ca4be29051aeaf2a975442-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:38 [async_llm.py:261] Added request cmpl-5cd3cd2de7ca4be29051aeaf2a975442-0.
INFO 03-01 23:57:39 [logger.py:42] Received request cmpl-0ec0b5236da24f909beda79448035069-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:39 [async_llm.py:261] Added request cmpl-0ec0b5236da24f909beda79448035069-0.
INFO 03-01 23:57:41 [logger.py:42] Received request cmpl-2d04082bb8e2460a84276182f896f893-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:41 [async_llm.py:261] Added request cmpl-2d04082bb8e2460a84276182f896f893-0.
INFO 03-01 23:57:42 [logger.py:42] Received request cmpl-5c6ce50df60a41e7a71453bcdee1a901-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:42 [async_llm.py:261] Added request cmpl-5c6ce50df60a41e7a71453bcdee1a901-0.
INFO 03-01 23:57:43 [logger.py:42] Received request cmpl-9889a8f4201142448886b66765a949dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:43 [async_llm.py:261] Added request cmpl-9889a8f4201142448886b66765a949dc-0.
INFO 03-01 23:57:44 [logger.py:42] Received request cmpl-79bc42ed133b4c1c9e6ad4635fa0de7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:44 [async_llm.py:261] Added request cmpl-79bc42ed133b4c1c9e6ad4635fa0de7b-0.
INFO 03-01 23:57:45 [logger.py:42] Received request cmpl-7e99ce50d8904e3aaa1bf5e2a6d25ffa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:45 [async_llm.py:261] Added request cmpl-7e99ce50d8904e3aaa1bf5e2a6d25ffa-0.
INFO 03-01 23:57:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:46 [logger.py:42] Received request cmpl-d4bfc355c2c640ffb0c20d5fe14088a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:46 [async_llm.py:261] Added request cmpl-d4bfc355c2c640ffb0c20d5fe14088a5-0.
INFO 03-01 23:57:47 [logger.py:42] Received request cmpl-546144e269c0424494f5b4f33116d1eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:47 [async_llm.py:261] Added request cmpl-546144e269c0424494f5b4f33116d1eb-0.
INFO 03-01 23:57:48 [logger.py:42] Received request cmpl-ad9c2c48b5d642f4b1de09c3136b9032-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:48 [async_llm.py:261] Added request cmpl-ad9c2c48b5d642f4b1de09c3136b9032-0.
INFO 03-01 23:57:49 [logger.py:42] Received request cmpl-31ca21fb47714ee9b67c2f2d708e0eab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:49 [async_llm.py:261] Added request cmpl-31ca21fb47714ee9b67c2f2d708e0eab-0.
INFO 03-01 23:57:50 [logger.py:42] Received request cmpl-122d737f238b4056925fa86d9409dba3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:50 [async_llm.py:261] Added request cmpl-122d737f238b4056925fa86d9409dba3-0.
INFO 03-01 23:57:51 [logger.py:42] Received request cmpl-71fc1647f83d49c2a6f7a18c4d534944-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:51 [async_llm.py:261] Added request cmpl-71fc1647f83d49c2a6f7a18c4d534944-0.
INFO 03-01 23:57:53 [logger.py:42] Received request cmpl-f28b553c2d574773a9435ddacfa41ca2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:53 [async_llm.py:261] Added request cmpl-f28b553c2d574773a9435ddacfa41ca2-0.
INFO 03-01 23:57:54 [logger.py:42] Received request cmpl-02b065163b6f4f5a81a8410b07326bca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:54 [async_llm.py:261] Added request cmpl-02b065163b6f4f5a81a8410b07326bca-0.
INFO 03-01 23:57:55 [logger.py:42] Received request cmpl-d9c0f513756340d68e4c113e4f88ae94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:55 [async_llm.py:261] Added request cmpl-d9c0f513756340d68e4c113e4f88ae94-0.
INFO 03-01 23:57:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:57:56 [logger.py:42] Received request cmpl-055d723dcb5b46558608081114e5b19b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:56 [async_llm.py:261] Added request cmpl-055d723dcb5b46558608081114e5b19b-0.
INFO 03-01 23:57:57 [logger.py:42] Received request cmpl-b81bbc83430c4a5696c746bc60d8a299-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:57 [async_llm.py:261] Added request cmpl-b81bbc83430c4a5696c746bc60d8a299-0.
INFO 03-01 23:57:58 [logger.py:42] Received request cmpl-a96dbdbdb7af4a6db6eaea927720c3d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:58 [async_llm.py:261] Added request cmpl-a96dbdbdb7af4a6db6eaea927720c3d0-0.
INFO 03-01 23:57:59 [logger.py:42] Received request cmpl-004f2360bc0f4e69af525623bda3a747-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:57:59 [async_llm.py:261] Added request cmpl-004f2360bc0f4e69af525623bda3a747-0.
INFO 03-01 23:58:00 [logger.py:42] Received request cmpl-7ec4f409b6db4b3e84c7ff6a079ab585-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:00 [async_llm.py:261] Added request cmpl-7ec4f409b6db4b3e84c7ff6a079ab585-0.
INFO 03-01 23:58:01 [logger.py:42] Received request cmpl-1da708a259b741ba9588c357bdc47773-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:01 [async_llm.py:261] Added request cmpl-1da708a259b741ba9588c357bdc47773-0.
INFO 03-01 23:58:02 [logger.py:42] Received request cmpl-4da8e83119c7451d8c48ba5820559cc0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:02 [async_llm.py:261] Added request cmpl-4da8e83119c7451d8c48ba5820559cc0-0.
INFO 03-01 23:58:04 [logger.py:42] Received request cmpl-c30f738d1c714d93b572347a0b676265-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:04 [async_llm.py:261] Added request cmpl-c30f738d1c714d93b572347a0b676265-0.
INFO 03-01 23:58:05 [logger.py:42] Received request cmpl-8042517736984a7da9fee7818e48b2e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:05 [async_llm.py:261] Added request cmpl-8042517736984a7da9fee7818e48b2e9-0.
INFO 03-01 23:58:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:06 [logger.py:42] Received request cmpl-2638c033917f456896b9ef69269b21d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:06 [async_llm.py:261] Added request cmpl-2638c033917f456896b9ef69269b21d8-0.
INFO 03-01 23:58:07 [logger.py:42] Received request cmpl-4bd140f437e24c1d8c96a4073943d9cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:07 [async_llm.py:261] Added request cmpl-4bd140f437e24c1d8c96a4073943d9cd-0.
INFO 03-01 23:58:08 [logger.py:42] Received request cmpl-714a25f4f4264cca9c98a83b960c465c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:08 [async_llm.py:261] Added request cmpl-714a25f4f4264cca9c98a83b960c465c-0.
INFO 03-01 23:58:09 [logger.py:42] Received request cmpl-9ba585cc76df4a52979ee53ec31f2f68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:09 [async_llm.py:261] Added request cmpl-9ba585cc76df4a52979ee53ec31f2f68-0.
INFO 03-01 23:58:10 [logger.py:42] Received request cmpl-46d4758298c5430db3133999dc090f8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:10 [async_llm.py:261] Added request cmpl-46d4758298c5430db3133999dc090f8d-0.
INFO 03-01 23:58:11 [logger.py:42] Received request cmpl-d1e8bfb6c64e4bcdb571639b4ac826c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:11 [async_llm.py:261] Added request cmpl-d1e8bfb6c64e4bcdb571639b4ac826c8-0.
INFO 03-01 23:58:12 [logger.py:42] Received request cmpl-40bf3409a1654343a00e441896a0d8f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:12 [async_llm.py:261] Added request cmpl-40bf3409a1654343a00e441896a0d8f9-0.
INFO 03-01 23:58:13 [logger.py:42] Received request cmpl-19e747a37dc4442fadbcf95af9d505ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:13 [async_llm.py:261] Added request cmpl-19e747a37dc4442fadbcf95af9d505ae-0.
INFO 03-01 23:58:14 [logger.py:42] Received request cmpl-c0679115b68e457786abf2d444fdef00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:14 [async_llm.py:261] Added request cmpl-c0679115b68e457786abf2d444fdef00-0.
INFO 03-01 23:58:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:16 [logger.py:42] Received request cmpl-e3ec1e83452c4ba1a0501f67b89ddf68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:16 [async_llm.py:261] Added request cmpl-e3ec1e83452c4ba1a0501f67b89ddf68-0.
INFO 03-01 23:58:17 [logger.py:42] Received request cmpl-fb015fd5e4b1495ead65f1164acb28d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:17 [async_llm.py:261] Added request cmpl-fb015fd5e4b1495ead65f1164acb28d8-0.
INFO 03-01 23:58:18 [logger.py:42] Received request cmpl-beb70658fa2948c88b39a46dd791cf4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:18 [async_llm.py:261] Added request cmpl-beb70658fa2948c88b39a46dd791cf4d-0.
INFO 03-01 23:58:19 [logger.py:42] Received request cmpl-80456c8458af4bc891710d7b188d3add-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:19 [async_llm.py:261] Added request cmpl-80456c8458af4bc891710d7b188d3add-0.
INFO 03-01 23:58:20 [logger.py:42] Received request cmpl-48b1f20fa2a04261aaa01bacfa3db5ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:20 [async_llm.py:261] Added request cmpl-48b1f20fa2a04261aaa01bacfa3db5ab-0.
INFO 03-01 23:58:21 [logger.py:42] Received request cmpl-998d4a95b773409c9d30ab44263eee29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:21 [async_llm.py:261] Added request cmpl-998d4a95b773409c9d30ab44263eee29-0.
INFO 03-01 23:58:22 [logger.py:42] Received request cmpl-14fa31d62d3c4fb6b3d65e796f791ca9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:22 [async_llm.py:261] Added request cmpl-14fa31d62d3c4fb6b3d65e796f791ca9-0.
INFO 03-01 23:58:23 [logger.py:42] Received request cmpl-08a175b744ff4e4e9da35fcd7bd28c0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:23 [async_llm.py:261] Added request cmpl-08a175b744ff4e4e9da35fcd7bd28c0f-0.
INFO 03-01 23:58:24 [logger.py:42] Received request cmpl-a51f7b168d9647e3a963ce62f9503f40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:24 [async_llm.py:261] Added request cmpl-a51f7b168d9647e3a963ce62f9503f40-0.
INFO 03-01 23:58:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:25 [logger.py:42] Received request cmpl-367c4ba9e11d4999968c2256e82abb0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:25 [async_llm.py:261] Added request cmpl-367c4ba9e11d4999968c2256e82abb0e-0.
INFO 03-01 23:58:27 [logger.py:42] Received request cmpl-2f628c5b18e8459c962d199dbec60a14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:27 [async_llm.py:261] Added request cmpl-2f628c5b18e8459c962d199dbec60a14-0.
INFO 03-01 23:58:28 [logger.py:42] Received request cmpl-7960ec3da5484e31a4e4974dfc8ab393-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:28 [async_llm.py:261] Added request cmpl-7960ec3da5484e31a4e4974dfc8ab393-0.
INFO 03-01 23:58:29 [logger.py:42] Received request cmpl-d1d91e31c7f3427f94f81bc6ba34e240-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:29 [async_llm.py:261] Added request cmpl-d1d91e31c7f3427f94f81bc6ba34e240-0.
INFO 03-01 23:58:30 [logger.py:42] Received request cmpl-51fa0c70b4854a7a87a552f8c9719c77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:30 [async_llm.py:261] Added request cmpl-51fa0c70b4854a7a87a552f8c9719c77-0.
INFO 03-01 23:58:31 [logger.py:42] Received request cmpl-30676d947d5941c1a0f2df0d267a500b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:31 [async_llm.py:261] Added request cmpl-30676d947d5941c1a0f2df0d267a500b-0.
INFO 03-01 23:58:32 [logger.py:42] Received request cmpl-8f19ce51503345a9bda1d4ad40ec51be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:32 [async_llm.py:261] Added request cmpl-8f19ce51503345a9bda1d4ad40ec51be-0.
INFO 03-01 23:58:33 [logger.py:42] Received request cmpl-37fc8369510949388654cfe3b60d594f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:33 [async_llm.py:261] Added request cmpl-37fc8369510949388654cfe3b60d594f-0.
INFO 03-01 23:58:34 [logger.py:42] Received request cmpl-740f2182d00e43cbbd1638f77c12c622-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:34 [async_llm.py:261] Added request cmpl-740f2182d00e43cbbd1638f77c12c622-0.
INFO 03-01 23:58:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:35 [logger.py:42] Received request cmpl-0383d79a1cd647a2bde5abd35327386a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:35 [async_llm.py:261] Added request cmpl-0383d79a1cd647a2bde5abd35327386a-0.
INFO 03-01 23:58:36 [logger.py:42] Received request cmpl-5d604c204dba4c08a6a879bd505fea88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:36 [async_llm.py:261] Added request cmpl-5d604c204dba4c08a6a879bd505fea88-0.
INFO 03-01 23:58:37 [logger.py:42] Received request cmpl-9e15dccdf9ac4c1796d9443fb2f5e222-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:37 [async_llm.py:261] Added request cmpl-9e15dccdf9ac4c1796d9443fb2f5e222-0.
INFO 03-01 23:58:39 [logger.py:42] Received request cmpl-ba76a91ec1104b0391b0d2d3f7ec9a2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:39 [async_llm.py:261] Added request cmpl-ba76a91ec1104b0391b0d2d3f7ec9a2d-0.
INFO 03-01 23:58:40 [logger.py:42] Received request cmpl-4c827980d895416189c97b74d024cb58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:40 [async_llm.py:261] Added request cmpl-4c827980d895416189c97b74d024cb58-0.
INFO 03-01 23:58:41 [logger.py:42] Received request cmpl-2eff04f03f704d21af57d9b118a9ee1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:41 [async_llm.py:261] Added request cmpl-2eff04f03f704d21af57d9b118a9ee1a-0.
INFO 03-01 23:58:42 [logger.py:42] Received request cmpl-5d6105d54cf248f5a9b3a150806e6268-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:42 [async_llm.py:261] Added request cmpl-5d6105d54cf248f5a9b3a150806e6268-0.
INFO 03-01 23:58:43 [logger.py:42] Received request cmpl-a436e65569a44787a4d4deaedfdce60d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:43 [async_llm.py:261] Added request cmpl-a436e65569a44787a4d4deaedfdce60d-0.
INFO 03-01 23:58:44 [logger.py:42] Received request cmpl-1c6f0781efb64dda9050027354971168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:44 [async_llm.py:261] Added request cmpl-1c6f0781efb64dda9050027354971168-0.
INFO 03-01 23:58:45 [logger.py:42] Received request cmpl-3400db7764fc4dd98f6b0e226c3f7155-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:45 [async_llm.py:261] Added request cmpl-3400db7764fc4dd98f6b0e226c3f7155-0.
INFO 03-01 23:58:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:46 [logger.py:42] Received request cmpl-f8174dc62ef2449fb0205587dadc49f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:46 [async_llm.py:261] Added request cmpl-f8174dc62ef2449fb0205587dadc49f1-0.
INFO 03-01 23:58:47 [logger.py:42] Received request cmpl-294d4e0c999d435f9c2f934f8220d2e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:47 [async_llm.py:261] Added request cmpl-294d4e0c999d435f9c2f934f8220d2e3-0.
INFO 03-01 23:58:48 [logger.py:42] Received request cmpl-ec89c20d95bb4569a11132aa4a536297-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:48 [async_llm.py:261] Added request cmpl-ec89c20d95bb4569a11132aa4a536297-0.
INFO 03-01 23:58:50 [logger.py:42] Received request cmpl-6815b0569e9e4312ae0ac9c8abfbece9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:50 [async_llm.py:261] Added request cmpl-6815b0569e9e4312ae0ac9c8abfbece9-0.
INFO 03-01 23:58:51 [logger.py:42] Received request cmpl-91308e83e274450ea6b492143ac0bd74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:51 [async_llm.py:261] Added request cmpl-91308e83e274450ea6b492143ac0bd74-0.
INFO 03-01 23:58:52 [logger.py:42] Received request cmpl-9f49ee22eeeb45e89430af24b47e0e90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:52 [async_llm.py:261] Added request cmpl-9f49ee22eeeb45e89430af24b47e0e90-0.
INFO 03-01 23:58:53 [logger.py:42] Received request cmpl-bf6436027b2e4a8ca12d3cb86936454a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:53 [async_llm.py:261] Added request cmpl-bf6436027b2e4a8ca12d3cb86936454a-0.
INFO 03-01 23:58:54 [logger.py:42] Received request cmpl-aee505d9f3844de58994092c8ee972de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:54 [async_llm.py:261] Added request cmpl-aee505d9f3844de58994092c8ee972de-0.
INFO 03-01 23:58:55 [logger.py:42] Received request cmpl-53951c10323d4ef8b1a6d6c3df240d7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:55 [async_llm.py:261] Added request cmpl-53951c10323d4ef8b1a6d6c3df240d7a-0.
INFO 03-01 23:58:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:58:56 [logger.py:42] Received request cmpl-26f3c2999cb14f59bf961f872fd7ecb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:56 [async_llm.py:261] Added request cmpl-26f3c2999cb14f59bf961f872fd7ecb0-0.
INFO 03-01 23:58:57 [logger.py:42] Received request cmpl-883213e7585d4493ac18906242e652a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:57 [async_llm.py:261] Added request cmpl-883213e7585d4493ac18906242e652a4-0.
INFO 03-01 23:58:58 [logger.py:42] Received request cmpl-a738e6f472f54e0dbc8f593d2110247a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:58 [async_llm.py:261] Added request cmpl-a738e6f472f54e0dbc8f593d2110247a-0.
INFO 03-01 23:58:59 [logger.py:42] Received request cmpl-fd4248403a7b4721a71d2d3e150b1568-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:58:59 [async_llm.py:261] Added request cmpl-fd4248403a7b4721a71d2d3e150b1568-0.
INFO 03-01 23:59:00 [logger.py:42] Received request cmpl-1df8d685a8e94116b1d75b2664dad22c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:00 [async_llm.py:261] Added request cmpl-1df8d685a8e94116b1d75b2664dad22c-0.
INFO 03-01 23:59:02 [logger.py:42] Received request cmpl-80711e39d9c84133a4b5a2f29c1a1860-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:02 [async_llm.py:261] Added request cmpl-80711e39d9c84133a4b5a2f29c1a1860-0.
INFO 03-01 23:59:03 [logger.py:42] Received request cmpl-71b5da57afb94cb7a1fdabe4a228fda8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:03 [async_llm.py:261] Added request cmpl-71b5da57afb94cb7a1fdabe4a228fda8-0.
INFO 03-01 23:59:04 [logger.py:42] Received request cmpl-706c66db32314c12817ab401c6698bba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:04 [async_llm.py:261] Added request cmpl-706c66db32314c12817ab401c6698bba-0.
INFO 03-01 23:59:05 [logger.py:42] Received request cmpl-fc7fcee06ce3458e95ee4a0138f62c9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:05 [async_llm.py:261] Added request cmpl-fc7fcee06ce3458e95ee4a0138f62c9b-0.
INFO 03-01 23:59:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:06 [logger.py:42] Received request cmpl-f60f4ac1d0ba4e6baa76407af5ff8274-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:06 [async_llm.py:261] Added request cmpl-f60f4ac1d0ba4e6baa76407af5ff8274-0.
INFO 03-01 23:59:07 [logger.py:42] Received request cmpl-29f48e0c790f41ad8ad5161cc1d8a2b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:07 [async_llm.py:261] Added request cmpl-29f48e0c790f41ad8ad5161cc1d8a2b6-0.
INFO 03-01 23:59:08 [logger.py:42] Received request cmpl-f35efb2825a842aa935e84639cfdfa07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:08 [async_llm.py:261] Added request cmpl-f35efb2825a842aa935e84639cfdfa07-0.
INFO 03-01 23:59:09 [logger.py:42] Received request cmpl-710710b3f3ee488ca0fb376219893350-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:09 [async_llm.py:261] Added request cmpl-710710b3f3ee488ca0fb376219893350-0.
INFO 03-01 23:59:10 [logger.py:42] Received request cmpl-bdf0e06d8b5b453881178674a2aca762-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:10 [async_llm.py:261] Added request cmpl-bdf0e06d8b5b453881178674a2aca762-0.
INFO 03-01 23:59:11 [logger.py:42] Received request cmpl-c6e7126db5854ba49347305998d1b764-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:11 [async_llm.py:261] Added request cmpl-c6e7126db5854ba49347305998d1b764-0.
INFO 03-01 23:59:12 [logger.py:42] Received request cmpl-d3f39d10fb3c4ba8be3daff04ac99636-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:12 [async_llm.py:261] Added request cmpl-d3f39d10fb3c4ba8be3daff04ac99636-0.
INFO 03-01 23:59:14 [logger.py:42] Received request cmpl-59ad16078bdf442c97801428b9b6fbc4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:14 [async_llm.py:261] Added request cmpl-59ad16078bdf442c97801428b9b6fbc4-0.
INFO 03-01 23:59:15 [logger.py:42] Received request cmpl-6449898984cd41c88bf474b7eb7852a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:15 [async_llm.py:261] Added request cmpl-6449898984cd41c88bf474b7eb7852a6-0.
INFO 03-01 23:59:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:16 [logger.py:42] Received request cmpl-46f06dd942a1430593b8b25e557b2b2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:16 [async_llm.py:261] Added request cmpl-46f06dd942a1430593b8b25e557b2b2c-0.
INFO 03-01 23:59:17 [logger.py:42] Received request cmpl-66b958eaebcd41a185a37515c547e6a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:17 [async_llm.py:261] Added request cmpl-66b958eaebcd41a185a37515c547e6a7-0.
INFO 03-01 23:59:18 [logger.py:42] Received request cmpl-97c4f6369faa44da81d135814b470b35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:18 [async_llm.py:261] Added request cmpl-97c4f6369faa44da81d135814b470b35-0.
INFO 03-01 23:59:19 [logger.py:42] Received request cmpl-73be9fcf5e4048df9cfde2678d28fcc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:19 [async_llm.py:261] Added request cmpl-73be9fcf5e4048df9cfde2678d28fcc5-0.
INFO 03-01 23:59:20 [logger.py:42] Received request cmpl-499f00eb52c8484a848c8975764531c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:20 [async_llm.py:261] Added request cmpl-499f00eb52c8484a848c8975764531c8-0.
INFO 03-01 23:59:21 [logger.py:42] Received request cmpl-a491829b08a147f9932aa3fc3997ac24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:21 [async_llm.py:261] Added request cmpl-a491829b08a147f9932aa3fc3997ac24-0.
INFO 03-01 23:59:22 [logger.py:42] Received request cmpl-1d121df13b2f44f6bf890a69ea195aef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:22 [async_llm.py:261] Added request cmpl-1d121df13b2f44f6bf890a69ea195aef-0.
INFO 03-01 23:59:23 [logger.py:42] Received request cmpl-bbd9542c7ac74b5ca864f3327674a37e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:23 [async_llm.py:261] Added request cmpl-bbd9542c7ac74b5ca864f3327674a37e-0.
INFO 03-01 23:59:25 [logger.py:42] Received request cmpl-297b85c7c4d7461daa5bfec04665b2a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:25 [async_llm.py:261] Added request cmpl-297b85c7c4d7461daa5bfec04665b2a1-0.
INFO 03-01 23:59:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:26 [logger.py:42] Received request cmpl-c604546d784446a9bb5cf5844cc5f325-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:26 [async_llm.py:261] Added request cmpl-c604546d784446a9bb5cf5844cc5f325-0.
INFO 03-01 23:59:27 [logger.py:42] Received request cmpl-49e4dc6c415a45e8a9a59bc960553b94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:27 [async_llm.py:261] Added request cmpl-49e4dc6c415a45e8a9a59bc960553b94-0.
INFO 03-01 23:59:28 [logger.py:42] Received request cmpl-8cfa9e9a2af34c2092b6568e0f22108f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:28 [async_llm.py:261] Added request cmpl-8cfa9e9a2af34c2092b6568e0f22108f-0.
INFO 03-01 23:59:29 [logger.py:42] Received request cmpl-495cd28342254938a17cb5a3d86412ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:29 [async_llm.py:261] Added request cmpl-495cd28342254938a17cb5a3d86412ef-0.
INFO 03-01 23:59:30 [logger.py:42] Received request cmpl-b518eb8be9394a9a95696aee5afb0b3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:30 [async_llm.py:261] Added request cmpl-b518eb8be9394a9a95696aee5afb0b3a-0.
INFO 03-01 23:59:31 [logger.py:42] Received request cmpl-04f654cebe994959b06de7bddea180d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:31 [async_llm.py:261] Added request cmpl-04f654cebe994959b06de7bddea180d6-0.
INFO 03-01 23:59:32 [logger.py:42] Received request cmpl-385570ad6c9a4b84b4bf086dcf810657-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:32 [async_llm.py:261] Added request cmpl-385570ad6c9a4b84b4bf086dcf810657-0.
INFO 03-01 23:59:33 [logger.py:42] Received request cmpl-62a78d116beb4a2790b5828a52324208-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:33 [async_llm.py:261] Added request cmpl-62a78d116beb4a2790b5828a52324208-0.
INFO 03-01 23:59:34 [logger.py:42] Received request cmpl-df02f9ad4a0448a0ae6cecca6ef59b28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:34 [async_llm.py:261] Added request cmpl-df02f9ad4a0448a0ae6cecca6ef59b28-0.
INFO 03-01 23:59:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:35 [logger.py:42] Received request cmpl-65648777c7874f5cad5916b0c51cffae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:35 [async_llm.py:261] Added request cmpl-65648777c7874f5cad5916b0c51cffae-0.
INFO 03-01 23:59:37 [logger.py:42] Received request cmpl-2f98ca3556c54a6d91dcb640d4bac825-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:37 [async_llm.py:261] Added request cmpl-2f98ca3556c54a6d91dcb640d4bac825-0.
INFO 03-01 23:59:38 [logger.py:42] Received request cmpl-ba471e414866461abf5921c3684630af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:38 [async_llm.py:261] Added request cmpl-ba471e414866461abf5921c3684630af-0.
INFO 03-01 23:59:39 [logger.py:42] Received request cmpl-9d8783ed79da4ed986872d8b88083df4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:39 [async_llm.py:261] Added request cmpl-9d8783ed79da4ed986872d8b88083df4-0.
INFO 03-01 23:59:40 [logger.py:42] Received request cmpl-935c1cec4a344261ac5b27c6b797f0c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:40 [async_llm.py:261] Added request cmpl-935c1cec4a344261ac5b27c6b797f0c8-0.
INFO 03-01 23:59:41 [logger.py:42] Received request cmpl-5a6d5d34cffc47749d979f8a44b0d959-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:41 [async_llm.py:261] Added request cmpl-5a6d5d34cffc47749d979f8a44b0d959-0.
INFO 03-01 23:59:42 [logger.py:42] Received request cmpl-699496641a094eebaf2947caecb8125d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:42 [async_llm.py:261] Added request cmpl-699496641a094eebaf2947caecb8125d-0.
INFO 03-01 23:59:43 [logger.py:42] Received request cmpl-e965b403b8364db490363ac02c08fb57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:43 [async_llm.py:261] Added request cmpl-e965b403b8364db490363ac02c08fb57-0.
INFO 03-01 23:59:44 [logger.py:42] Received request cmpl-8a950524d5ed4783900bdbfae6c0be85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:44 [async_llm.py:261] Added request cmpl-8a950524d5ed4783900bdbfae6c0be85-0.
INFO 03-01 23:59:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:45 [logger.py:42] Received request cmpl-da4871a110554281b85d5966bc141c02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:45 [async_llm.py:261] Added request cmpl-da4871a110554281b85d5966bc141c02-0.
INFO 03-01 23:59:46 [logger.py:42] Received request cmpl-8187965ca5524fe5b2c4d45e9409019e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:46 [async_llm.py:261] Added request cmpl-8187965ca5524fe5b2c4d45e9409019e-0.
INFO 03-01 23:59:47 [logger.py:42] Received request cmpl-22d694bba643429aaa18bc4e66030b13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:47 [async_llm.py:261] Added request cmpl-22d694bba643429aaa18bc4e66030b13-0.
INFO 03-01 23:59:49 [logger.py:42] Received request cmpl-d87a7d05484241688f333e39917afd37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:49 [async_llm.py:261] Added request cmpl-d87a7d05484241688f333e39917afd37-0.
INFO 03-01 23:59:50 [logger.py:42] Received request cmpl-d582ee4bc651480d94e3f9064ea9d051-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:50 [async_llm.py:261] Added request cmpl-d582ee4bc651480d94e3f9064ea9d051-0.
INFO 03-01 23:59:51 [logger.py:42] Received request cmpl-783e63460387466c8f679a33936ac508-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:51 [async_llm.py:261] Added request cmpl-783e63460387466c8f679a33936ac508-0.
INFO 03-01 23:59:52 [logger.py:42] Received request cmpl-75cac3138a954a56ace35950ce1e2b6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:52 [async_llm.py:261] Added request cmpl-75cac3138a954a56ace35950ce1e2b6f-0.
INFO 03-01 23:59:53 [logger.py:42] Received request cmpl-766cea4253a549e4ad2e02820573e53a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:53 [async_llm.py:261] Added request cmpl-766cea4253a549e4ad2e02820573e53a-0.
INFO 03-01 23:59:54 [logger.py:42] Received request cmpl-cc63f44792d94517b5b7f7d124710b54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:54 [async_llm.py:261] Added request cmpl-cc63f44792d94517b5b7f7d124710b54-0.
INFO 03-01 23:59:55 [logger.py:42] Received request cmpl-dbe5acb7351b4384a63a1b306c5122ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:55 [async_llm.py:261] Added request cmpl-dbe5acb7351b4384a63a1b306c5122ec-0.
INFO 03-01 23:59:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-01 23:59:56 [logger.py:42] Received request cmpl-75952d64e27141f0a4dab666986f1bb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:56 [async_llm.py:261] Added request cmpl-75952d64e27141f0a4dab666986f1bb6-0.
INFO 03-01 23:59:57 [logger.py:42] Received request cmpl-76b4ed4ad2004a468d0fc4ebafb9ff47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:57 [async_llm.py:261] Added request cmpl-76b4ed4ad2004a468d0fc4ebafb9ff47-0.
INFO 03-01 23:59:58 [logger.py:42] Received request cmpl-800f2df9bed744c292462198af1f34c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:58 [async_llm.py:261] Added request cmpl-800f2df9bed744c292462198af1f34c0-0.
INFO 03-01 23:59:59 [logger.py:42] Received request cmpl-5aab2c094af94c2fb0e100d059d277cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-01 23:59:59 [async_llm.py:261] Added request cmpl-5aab2c094af94c2fb0e100d059d277cb-0.
INFO 03-02 00:00:01 [logger.py:42] Received request cmpl-59a506cc819f496ea523d65e86447009-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:01 [async_llm.py:261] Added request cmpl-59a506cc819f496ea523d65e86447009-0.
INFO 03-02 00:00:02 [logger.py:42] Received request cmpl-83e747a45f4847d49c4c8072a603c054-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:02 [async_llm.py:261] Added request cmpl-83e747a45f4847d49c4c8072a603c054-0.
INFO 03-02 00:00:03 [logger.py:42] Received request cmpl-3be9f32fd55b4ea2a17f9c9cfe6d6076-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:03 [async_llm.py:261] Added request cmpl-3be9f32fd55b4ea2a17f9c9cfe6d6076-0.
INFO 03-02 00:00:04 [logger.py:42] Received request cmpl-04901e90ef3e4d248f55775933c40340-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:04 [async_llm.py:261] Added request cmpl-04901e90ef3e4d248f55775933c40340-0.
INFO 03-02 00:00:05 [logger.py:42] Received request cmpl-d6efbb1cc95f40ffaa66083da116c40a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:05 [async_llm.py:261] Added request cmpl-d6efbb1cc95f40ffaa66083da116c40a-0.
INFO 03-02 00:00:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:06 [logger.py:42] Received request cmpl-aa07e688ef644a7a962aeeb8a9e8ec76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:06 [async_llm.py:261] Added request cmpl-aa07e688ef644a7a962aeeb8a9e8ec76-0.
INFO 03-02 00:00:07 [logger.py:42] Received request cmpl-9c1d4256ff0245aabb37cf4bbc3ed084-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:07 [async_llm.py:261] Added request cmpl-9c1d4256ff0245aabb37cf4bbc3ed084-0.
INFO 03-02 00:00:08 [logger.py:42] Received request cmpl-2d7ca8b2f52d47948008ae1e0094075b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:08 [async_llm.py:261] Added request cmpl-2d7ca8b2f52d47948008ae1e0094075b-0.
INFO 03-02 00:00:09 [logger.py:42] Received request cmpl-800d5ab8a1114bf694580fa9093e1c11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:09 [async_llm.py:261] Added request cmpl-800d5ab8a1114bf694580fa9093e1c11-0.
INFO 03-02 00:00:10 [logger.py:42] Received request cmpl-446354dfc44b49a684b1da022441c1da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:10 [async_llm.py:261] Added request cmpl-446354dfc44b49a684b1da022441c1da-0.
INFO 03-02 00:00:11 [logger.py:42] Received request cmpl-cc75a57a481e483193c48949a1b25ffa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:11 [async_llm.py:261] Added request cmpl-cc75a57a481e483193c48949a1b25ffa-0.
INFO 03-02 00:00:13 [logger.py:42] Received request cmpl-443bbda9ad984bff9b1511fad520c032-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:13 [async_llm.py:261] Added request cmpl-443bbda9ad984bff9b1511fad520c032-0.
INFO 03-02 00:00:14 [logger.py:42] Received request cmpl-db27412fb9444dfc8994d3e1e8d5e631-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:14 [async_llm.py:261] Added request cmpl-db27412fb9444dfc8994d3e1e8d5e631-0.
INFO 03-02 00:00:15 [logger.py:42] Received request cmpl-90ff725c36ca4a8eab9b6eb30c2bc7e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:15 [async_llm.py:261] Added request cmpl-90ff725c36ca4a8eab9b6eb30c2bc7e9-0.
INFO 03-02 00:00:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:16 [logger.py:42] Received request cmpl-0f491cdde1f64ed5a61e90b56f02b81c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:16 [async_llm.py:261] Added request cmpl-0f491cdde1f64ed5a61e90b56f02b81c-0.
INFO 03-02 00:00:17 [logger.py:42] Received request cmpl-86938bf5c8c044dbb6ed8d5f69af799a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:17 [async_llm.py:261] Added request cmpl-86938bf5c8c044dbb6ed8d5f69af799a-0.
INFO 03-02 00:00:18 [logger.py:42] Received request cmpl-8bceda01041941d98718327c264accb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:18 [async_llm.py:261] Added request cmpl-8bceda01041941d98718327c264accb0-0.
INFO 03-02 00:00:19 [logger.py:42] Received request cmpl-aad319faa50d413f9d13509e96818c43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:19 [async_llm.py:261] Added request cmpl-aad319faa50d413f9d13509e96818c43-0.
INFO 03-02 00:00:20 [logger.py:42] Received request cmpl-d28059c22e6a4ac8a03dcb02bf77ddd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:20 [async_llm.py:261] Added request cmpl-d28059c22e6a4ac8a03dcb02bf77ddd9-0.
INFO 03-02 00:00:21 [logger.py:42] Received request cmpl-4cb66e572edc48c3bbd05197bb898e8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:21 [async_llm.py:261] Added request cmpl-4cb66e572edc48c3bbd05197bb898e8a-0.
INFO 03-02 00:00:22 [logger.py:42] Received request cmpl-47e793893b6842bf89cc5a3d262b3440-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:22 [async_llm.py:261] Added request cmpl-47e793893b6842bf89cc5a3d262b3440-0.
INFO 03-02 00:00:24 [logger.py:42] Received request cmpl-dc68b7e99a5845d7a32043e43b94e35f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:24 [async_llm.py:261] Added request cmpl-dc68b7e99a5845d7a32043e43b94e35f-0.
INFO 03-02 00:00:25 [logger.py:42] Received request cmpl-46ce4a04899248b1a956d2ba2c7f0726-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:25 [async_llm.py:261] Added request cmpl-46ce4a04899248b1a956d2ba2c7f0726-0.
INFO 03-02 00:00:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:26 [logger.py:42] Received request cmpl-dc211b8ac22140d285e428b8d2e173b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:26 [async_llm.py:261] Added request cmpl-dc211b8ac22140d285e428b8d2e173b2-0.
INFO 03-02 00:00:27 [logger.py:42] Received request cmpl-bb1df8ae581b49f094a0af0c101e0b0b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:27 [async_llm.py:261] Added request cmpl-bb1df8ae581b49f094a0af0c101e0b0b-0.
INFO 03-02 00:00:28 [logger.py:42] Received request cmpl-79ead3597d2647e2b206a05cd9f6f3c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:28 [async_llm.py:261] Added request cmpl-79ead3597d2647e2b206a05cd9f6f3c7-0.
INFO 03-02 00:00:29 [logger.py:42] Received request cmpl-72a828231e944005b6a8c6a4e27ccd71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:29 [async_llm.py:261] Added request cmpl-72a828231e944005b6a8c6a4e27ccd71-0.
INFO 03-02 00:00:30 [logger.py:42] Received request cmpl-ab43bf3fa5b34e26b8804c2048716d40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:30 [async_llm.py:261] Added request cmpl-ab43bf3fa5b34e26b8804c2048716d40-0.
INFO 03-02 00:00:31 [logger.py:42] Received request cmpl-6a3aa2f2458a44008b721a21aecb01f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:31 [async_llm.py:261] Added request cmpl-6a3aa2f2458a44008b721a21aecb01f3-0.
INFO 03-02 00:00:32 [logger.py:42] Received request cmpl-3ab4713683c245d29eaeb5d273e9a739-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:32 [async_llm.py:261] Added request cmpl-3ab4713683c245d29eaeb5d273e9a739-0.
INFO 03-02 00:00:33 [logger.py:42] Received request cmpl-f4cf266a0d3843418def224925689d6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:33 [async_llm.py:261] Added request cmpl-f4cf266a0d3843418def224925689d6b-0.
INFO 03-02 00:00:34 [logger.py:42] Received request cmpl-9705bca0875c42e19942deae4c732d48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:34 [async_llm.py:261] Added request cmpl-9705bca0875c42e19942deae4c732d48-0.
INFO 03-02 00:00:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:36 [logger.py:42] Received request cmpl-b82648bbdcd74a7fb0a878f87e246515-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:36 [async_llm.py:261] Added request cmpl-b82648bbdcd74a7fb0a878f87e246515-0.
INFO 03-02 00:00:37 [logger.py:42] Received request cmpl-73b63b9a45fe4d6bb091a2d23e31764d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:37 [async_llm.py:261] Added request cmpl-73b63b9a45fe4d6bb091a2d23e31764d-0.
INFO 03-02 00:00:38 [logger.py:42] Received request cmpl-e28ecdac6806455dbb637b52300206d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:38 [async_llm.py:261] Added request cmpl-e28ecdac6806455dbb637b52300206d5-0.
INFO 03-02 00:00:39 [logger.py:42] Received request cmpl-f67074381cd04e95ae2fe52c8793dc0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:39 [async_llm.py:261] Added request cmpl-f67074381cd04e95ae2fe52c8793dc0f-0.
INFO 03-02 00:00:40 [logger.py:42] Received request cmpl-6cd6e30363a34e7790720c5089215821-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:40 [async_llm.py:261] Added request cmpl-6cd6e30363a34e7790720c5089215821-0.
INFO 03-02 00:00:41 [logger.py:42] Received request cmpl-d74bc463923e4456989a366f5e0c0b70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:41 [async_llm.py:261] Added request cmpl-d74bc463923e4456989a366f5e0c0b70-0.
INFO 03-02 00:00:42 [logger.py:42] Received request cmpl-c94ba61998eb43bf8227cbcfbc49f0a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:42 [async_llm.py:261] Added request cmpl-c94ba61998eb43bf8227cbcfbc49f0a2-0.
INFO 03-02 00:00:43 [logger.py:42] Received request cmpl-ebb3cf98ec22423bab54be6f2faae5c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:43 [async_llm.py:261] Added request cmpl-ebb3cf98ec22423bab54be6f2faae5c5-0.
INFO 03-02 00:00:44 [logger.py:42] Received request cmpl-0c4f832cabcc444dbe425dbf6a259951-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:44 [async_llm.py:261] Added request cmpl-0c4f832cabcc444dbe425dbf6a259951-0.
INFO 03-02 00:00:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:00:45 [logger.py:42] Received request cmpl-c98a7389a6194f058c3dc6efc8245f6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:45 [async_llm.py:261] Added request cmpl-c98a7389a6194f058c3dc6efc8245f6e-0.
INFO 03-02 00:00:46 [logger.py:42] Received request cmpl-8ba6298f1c1746be9408ecb216fe1153-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:46 [async_llm.py:261] Added request cmpl-8ba6298f1c1746be9408ecb216fe1153-0.
INFO 03-02 00:00:48 [logger.py:42] Received request cmpl-222b4c2fc0ec4257932080fde50f0d4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:48 [async_llm.py:261] Added request cmpl-222b4c2fc0ec4257932080fde50f0d4f-0.
INFO 03-02 00:00:49 [logger.py:42] Received request cmpl-ee9cb803e05c45c29b37aec621a4274c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:49 [async_llm.py:261] Added request cmpl-ee9cb803e05c45c29b37aec621a4274c-0.
INFO 03-02 00:00:50 [logger.py:42] Received request cmpl-d718c3f1da9a4b5eacecf7b89a4ba6a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:50 [async_llm.py:261] Added request cmpl-d718c3f1da9a4b5eacecf7b89a4ba6a0-0.
INFO 03-02 00:00:51 [logger.py:42] Received request cmpl-8d4aaa7fd91f418cbfaf0f40bc4d1921-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:51 [async_llm.py:261] Added request cmpl-8d4aaa7fd91f418cbfaf0f40bc4d1921-0.
INFO 03-02 00:00:52 [logger.py:42] Received request cmpl-42fabb2416ad41e1a20568f01db43210-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:52 [async_llm.py:261] Added request cmpl-42fabb2416ad41e1a20568f01db43210-0.
INFO 03-02 00:00:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
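The log entries above are produced by OpenAI-compatible completion requests against the funcpod's `/v1/completions` endpoint. A minimal client sketch follows; the base URL, and the use of the raw HTTP API rather than an SDK, are assumptions for illustration, while the payload fields (`prompt`, `max_tokens=5`, `temperature=0.0`) mirror the SamplingParams recorded in the log.

```python
import json
import urllib.request

def build_completion_request(prompt: str, max_tokens: int = 5) -> dict:
    """Build an OpenAI-style /v1/completions payload matching the logged params."""
    return {
        "model": "CR-70B",         # model name from the funcpod table above
        "prompt": prompt,
        "max_tokens": max_tokens,  # the logged requests use max_tokens=5
        "temperature": 0.0,        # greedy decoding, as recorded in the log
    }

def complete(base_url: str, prompt: str, max_tokens: int = 5) -> dict:
    """POST the request to the funcpod and return the parsed JSON response.

    base_url is hypothetical here; substitute the endpoint your tenant exposes.
    """
    data = json.dumps(build_completion_request(prompt, max_tokens)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the endpoint is OpenAI-compatible, any OpenAI-style SDK pointed at the funcpod URL would issue an equivalent request; the raw-HTTP form is shown only to make the payload explicit.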
INFO 03-02 00:00:55 [logger.py:42] Received request cmpl-28d05dbe4fe24903ae8d0a6d95495395-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:55 [async_llm.py:261] Added request cmpl-28d05dbe4fe24903ae8d0a6d95495395-0.
INFO 03-02 00:00:56 [logger.py:42] Received request cmpl-04f8d311fe484d27b2d86c5a0dd8508d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:56 [async_llm.py:261] Added request cmpl-04f8d311fe484d27b2d86c5a0dd8508d-0.
INFO 03-02 00:00:57 [logger.py:42] Received request cmpl-0bfb9e5267f44af8902950745a735eae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:57 [async_llm.py:261] Added request cmpl-0bfb9e5267f44af8902950745a735eae-0.
INFO 03-02 00:00:59 [logger.py:42] Received request cmpl-40cb6d2236ce4f308cb110472e3898a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:00:59 [async_llm.py:261] Added request cmpl-40cb6d2236ce4f308cb110472e3898a2-0.
INFO 03-02 00:01:00 [logger.py:42] Received request cmpl-0896a968e7564a35a01cf81ee453ed7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:00 [async_llm.py:261] Added request cmpl-0896a968e7564a35a01cf81ee453ed7e-0.
INFO 03-02 00:01:01 [logger.py:42] Received request cmpl-d6305c8797f3467fb2693f09bb112e48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:01 [async_llm.py:261] Added request cmpl-d6305c8797f3467fb2693f09bb112e48-0.
INFO 03-02 00:01:02 [logger.py:42] Received request cmpl-b1f67ee1304e4769baa1b4be6f640c12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:02 [async_llm.py:261] Added request cmpl-b1f67ee1304e4769baa1b4be6f640c12-0.
INFO 03-02 00:01:03 [logger.py:42] Received request cmpl-2d5d1e8d97634ba1a686439eba42ba51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:03 [async_llm.py:261] Added request cmpl-2d5d1e8d97634ba1a686439eba42ba51-0.
INFO 03-02 00:01:04 [logger.py:42] Received request cmpl-fdd7ed9ebd6d43b2839d94f40a877bbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:04 [async_llm.py:261] Added request cmpl-fdd7ed9ebd6d43b2839d94f40a877bbd-0.
INFO 03-02 00:01:05 [logger.py:42] Received request cmpl-eb150ac64cfd44e794e6fb70cfa3542d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:05 [async_llm.py:261] Added request cmpl-eb150ac64cfd44e794e6fb70cfa3542d-0.
INFO 03-02 00:01:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:06 [logger.py:42] Received request cmpl-3aaef0dac79e49a68f6fc996cb0d1d61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:06 [async_llm.py:261] Added request cmpl-3aaef0dac79e49a68f6fc996cb0d1d61-0.
INFO 03-02 00:01:07 [logger.py:42] Received request cmpl-617a19bacd7649cdbe96960480f8e725-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:07 [async_llm.py:261] Added request cmpl-617a19bacd7649cdbe96960480f8e725-0.
INFO 03-02 00:01:08 [logger.py:42] Received request cmpl-c64a16b1df9e41bc9d12630e064be70b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:08 [async_llm.py:261] Added request cmpl-c64a16b1df9e41bc9d12630e064be70b-0.
INFO 03-02 00:01:09 [logger.py:42] Received request cmpl-84078fb5792b40f693e2cd3477d11de4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:09 [async_llm.py:261] Added request cmpl-84078fb5792b40f693e2cd3477d11de4-0.
INFO 03-02 00:01:11 [logger.py:42] Received request cmpl-dc0bfb8544234b29aad92f778813a052-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:11 [async_llm.py:261] Added request cmpl-dc0bfb8544234b29aad92f778813a052-0.
INFO 03-02 00:01:12 [logger.py:42] Received request cmpl-0a947328b37548a38bb5d10fdc08230e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:12 [async_llm.py:261] Added request cmpl-0a947328b37548a38bb5d10fdc08230e-0.
INFO 03-02 00:01:13 [logger.py:42] Received request cmpl-ad13c4e0a9924b40a0ee308b9fe305c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:13 [async_llm.py:261] Added request cmpl-ad13c4e0a9924b40a0ee308b9fe305c2-0.
INFO 03-02 00:01:14 [logger.py:42] Received request cmpl-878c41367b5148a69d156b2f9c819575-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:14 [async_llm.py:261] Added request cmpl-878c41367b5148a69d156b2f9c819575-0.
INFO 03-02 00:01:15 [logger.py:42] Received request cmpl-7e83793816ef40a6ae04f31a6c1a8504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:15 [async_llm.py:261] Added request cmpl-7e83793816ef40a6ae04f31a6c1a8504-0.
INFO 03-02 00:01:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:16 [logger.py:42] Received request cmpl-0d93ec0837934ee38f1df8c6158dd8a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:16 [async_llm.py:261] Added request cmpl-0d93ec0837934ee38f1df8c6158dd8a7-0.
INFO 03-02 00:01:17 [logger.py:42] Received request cmpl-9f8f99dc31bd41a08c9952f44070d588-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:17 [async_llm.py:261] Added request cmpl-9f8f99dc31bd41a08c9952f44070d588-0.
INFO 03-02 00:01:18 [logger.py:42] Received request cmpl-5d5dd374335e4832bfea5d8da7e28a1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:18 [async_llm.py:261] Added request cmpl-5d5dd374335e4832bfea5d8da7e28a1b-0.
INFO 03-02 00:01:19 [logger.py:42] Received request cmpl-e26d1ebfaec84b6499df25350e222183-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:19 [async_llm.py:261] Added request cmpl-e26d1ebfaec84b6499df25350e222183-0.
INFO 03-02 00:01:20 [logger.py:42] Received request cmpl-7588f0d72efe403da7a42ff7df465b82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:20 [async_llm.py:261] Added request cmpl-7588f0d72efe403da7a42ff7df465b82-0.
INFO 03-02 00:01:22 [logger.py:42] Received request cmpl-6b0d6fe6c475404b884a6e4f92fb8eee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:22 [async_llm.py:261] Added request cmpl-6b0d6fe6c475404b884a6e4f92fb8eee-0.
INFO 03-02 00:01:23 [logger.py:42] Received request cmpl-64b151393155409894e1274e183ec248-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:23 [async_llm.py:261] Added request cmpl-64b151393155409894e1274e183ec248-0.
INFO 03-02 00:01:24 [logger.py:42] Received request cmpl-08f605ed3cb14bf7aca4e9c2e0036b49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:24 [async_llm.py:261] Added request cmpl-08f605ed3cb14bf7aca4e9c2e0036b49-0.
INFO 03-02 00:01:25 [logger.py:42] Received request cmpl-f3a42b9572da4422b60a5c98ff9840d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:25 [async_llm.py:261] Added request cmpl-f3a42b9572da4422b60a5c98ff9840d3-0.
INFO 03-02 00:01:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:01:26 [logger.py:42] Received request cmpl-222bb141d748425798b36f11e7857b10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:26 [async_llm.py:261] Added request cmpl-222bb141d748425798b36f11e7857b10-0.
INFO 03-02 00:01:27 [logger.py:42] Received request cmpl-5e7e90e2ee8949a88102b19764620d01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:27 [async_llm.py:261] Added request cmpl-5e7e90e2ee8949a88102b19764620d01-0.
INFO 03-02 00:01:28 [logger.py:42] Received request cmpl-df0d2c8390964f84822f6cd3debb1171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:28 [async_llm.py:261] Added request cmpl-df0d2c8390964f84822f6cd3debb1171-0.
INFO 03-02 00:01:29 [logger.py:42] Received request cmpl-d61d8f4eda0d45d1896ee1c1add92feb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:29 [async_llm.py:261] Added request cmpl-d61d8f4eda0d45d1896ee1c1add92feb-0.
INFO 03-02 00:01:30 [logger.py:42] Received request cmpl-a3e3e02dd87546d7ae1a385301b18362-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:30 [async_llm.py:261] Added request cmpl-a3e3e02dd87546d7ae1a385301b18362-0.
INFO 03-02 00:01:31 [logger.py:42] Received request cmpl-933c10703c9e470d9f3fca7fb16592b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:01:31 [async_llm.py:261] Added request cmpl-933c10703c9e470d9f3fca7fb16592b2-0.
[... 40 further near-identical request/response triples (Received request / 200 OK / Added request) from 00:01:32 to 00:02:15 elided; only the timestamp and request ID vary. The parameters are identical throughout: prompt 'write a quick sort algorithm.', max_tokens=5, temperature=0.0. Periodic engine stats over this interval were constant: ...]
INFO 03-02 00:01:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:16 [logger.py:42] Received request cmpl-1da1f8aa6c0c4745b21bb2aa716c7fca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:16 [async_llm.py:261] Added request cmpl-1da1f8aa6c0c4745b21bb2aa716c7fca-0.
INFO 03-02 00:02:17 [logger.py:42] Received request cmpl-187d37b61c4d48dea82f550ec677ca9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:17 [async_llm.py:261] Added request cmpl-187d37b61c4d48dea82f550ec677ca9e-0.
INFO 03-02 00:02:18 [logger.py:42] Received request cmpl-84505128ec794ca29574949774f2ab08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:18 [async_llm.py:261] Added request cmpl-84505128ec794ca29574949774f2ab08-0.
INFO 03-02 00:02:20 [logger.py:42] Received request cmpl-40a924e8c87146aab3b15dd6e4f0b1a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:20 [async_llm.py:261] Added request cmpl-40a924e8c87146aab3b15dd6e4f0b1a8-0.
INFO 03-02 00:02:21 [logger.py:42] Received request cmpl-f5082cea5c0e468686e170939254af3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:21 [async_llm.py:261] Added request cmpl-f5082cea5c0e468686e170939254af3f-0.
INFO 03-02 00:02:22 [logger.py:42] Received request cmpl-b55f1397354d48bca748c8e489c2c06e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:22 [async_llm.py:261] Added request cmpl-b55f1397354d48bca748c8e489c2c06e-0.
INFO 03-02 00:02:23 [logger.py:42] Received request cmpl-54c4a1db5cee49bbb6c3142ee9a422cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:23 [async_llm.py:261] Added request cmpl-54c4a1db5cee49bbb6c3142ee9a422cb-0.
INFO 03-02 00:02:24 [logger.py:42] Received request cmpl-fd9fe30b7ee64dde850dbd0f76689571-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:24 [async_llm.py:261] Added request cmpl-fd9fe30b7ee64dde850dbd0f76689571-0.
INFO 03-02 00:02:25 [logger.py:42] Received request cmpl-b5aea1a7ad4f4f8ca84ed8e6c3545f78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:25 [async_llm.py:261] Added request cmpl-b5aea1a7ad4f4f8ca84ed8e6c3545f78-0.
INFO 03-02 00:02:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:26 [logger.py:42] Received request cmpl-3ad5dad151fa42d7beb7a842f0edc891-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:26 [async_llm.py:261] Added request cmpl-3ad5dad151fa42d7beb7a842f0edc891-0.
INFO 03-02 00:02:27 [logger.py:42] Received request cmpl-fc33fd880d9c4635bf6d3cdbad0f2559-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:27 [async_llm.py:261] Added request cmpl-fc33fd880d9c4635bf6d3cdbad0f2559-0.
INFO 03-02 00:02:28 [logger.py:42] Received request cmpl-fe83ffbf374240929056fc28a89fb103-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:28 [async_llm.py:261] Added request cmpl-fe83ffbf374240929056fc28a89fb103-0.
INFO 03-02 00:02:29 [logger.py:42] Received request cmpl-cd6ae3739e7d4d9e9ac61c07bc4a42af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:29 [async_llm.py:261] Added request cmpl-cd6ae3739e7d4d9e9ac61c07bc4a42af-0.
INFO 03-02 00:02:30 [logger.py:42] Received request cmpl-6878d3c13fbe4a73a0e17b7f5f15bbc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:30 [async_llm.py:261] Added request cmpl-6878d3c13fbe4a73a0e17b7f5f15bbc5-0.
INFO 03-02 00:02:32 [logger.py:42] Received request cmpl-6fa2f456e3cc49868ef88e7656e65449-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:32 [async_llm.py:261] Added request cmpl-6fa2f456e3cc49868ef88e7656e65449-0.
INFO 03-02 00:02:33 [logger.py:42] Received request cmpl-88af5b826b94455eb4043db5bc48917d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:33 [async_llm.py:261] Added request cmpl-88af5b826b94455eb4043db5bc48917d-0.
INFO 03-02 00:02:34 [logger.py:42] Received request cmpl-16400b93a88849f5b7fd2a26069eb88a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:34 [async_llm.py:261] Added request cmpl-16400b93a88849f5b7fd2a26069eb88a-0.
INFO 03-02 00:02:35 [logger.py:42] Received request cmpl-3e3e1960163b4bc88a283f9a43b6747b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:35 [async_llm.py:261] Added request cmpl-3e3e1960163b4bc88a283f9a43b6747b-0.
INFO 03-02 00:02:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:36 [logger.py:42] Received request cmpl-d2959867fe8c40b8b567a429904e7274-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:36 [async_llm.py:261] Added request cmpl-d2959867fe8c40b8b567a429904e7274-0.
INFO 03-02 00:02:37 [logger.py:42] Received request cmpl-847fc58bc2a945509414c007328092d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:37 [async_llm.py:261] Added request cmpl-847fc58bc2a945509414c007328092d1-0.
INFO 03-02 00:02:38 [logger.py:42] Received request cmpl-3b743d2144e142c7b0b19f56bc23ac11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:38 [async_llm.py:261] Added request cmpl-3b743d2144e142c7b0b19f56bc23ac11-0.
INFO 03-02 00:02:39 [logger.py:42] Received request cmpl-e79004c5458d4e7d91be55dbea75c5a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:39 [async_llm.py:261] Added request cmpl-e79004c5458d4e7d91be55dbea75c5a2-0.
INFO 03-02 00:02:40 [logger.py:42] Received request cmpl-311c9b906b9b48a98503ef59603efd85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:40 [async_llm.py:261] Added request cmpl-311c9b906b9b48a98503ef59603efd85-0.
INFO 03-02 00:02:41 [logger.py:42] Received request cmpl-46581eb018bb4b79b4c15da49e362d9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:41 [async_llm.py:261] Added request cmpl-46581eb018bb4b79b4c15da49e362d9b-0.
INFO 03-02 00:02:42 [logger.py:42] Received request cmpl-fbafacd7c9ad429b89b64972d92d176e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:42 [async_llm.py:261] Added request cmpl-fbafacd7c9ad429b89b64972d92d176e-0.
INFO 03-02 00:02:44 [logger.py:42] Received request cmpl-00b70d9ecf0248d29cf63476fac1ba9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:44 [async_llm.py:261] Added request cmpl-00b70d9ecf0248d29cf63476fac1ba9d-0.
INFO 03-02 00:02:45 [logger.py:42] Received request cmpl-858cd9df5fe044d99ed5a53320a65d5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:45 [async_llm.py:261] Added request cmpl-858cd9df5fe044d99ed5a53320a65d5d-0.
INFO 03-02 00:02:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:46 [logger.py:42] Received request cmpl-5dbedb8b4ab6460d8159775007a70256-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:46 [async_llm.py:261] Added request cmpl-5dbedb8b4ab6460d8159775007a70256-0.
INFO 03-02 00:02:47 [logger.py:42] Received request cmpl-642d347f581c40bd86ab166b17a42d6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:47 [async_llm.py:261] Added request cmpl-642d347f581c40bd86ab166b17a42d6f-0.
INFO 03-02 00:02:48 [logger.py:42] Received request cmpl-cdc8cdaccff24a278c03a85f9f357631-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:48 [async_llm.py:261] Added request cmpl-cdc8cdaccff24a278c03a85f9f357631-0.
INFO 03-02 00:02:49 [logger.py:42] Received request cmpl-ae6d438254d6448ca7eb68c734949d33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:49 [async_llm.py:261] Added request cmpl-ae6d438254d6448ca7eb68c734949d33-0.
INFO 03-02 00:02:50 [logger.py:42] Received request cmpl-86b416df5f0741ca957894b4277ebcb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:50 [async_llm.py:261] Added request cmpl-86b416df5f0741ca957894b4277ebcb9-0.
INFO 03-02 00:02:51 [logger.py:42] Received request cmpl-1faf9a82ec364143b02cdfc8a467febe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:51 [async_llm.py:261] Added request cmpl-1faf9a82ec364143b02cdfc8a467febe-0.
INFO 03-02 00:02:52 [logger.py:42] Received request cmpl-e9f83b33d1874002ba8306cab6b6bc04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:52 [async_llm.py:261] Added request cmpl-e9f83b33d1874002ba8306cab6b6bc04-0.
INFO 03-02 00:02:53 [logger.py:42] Received request cmpl-4fb289dc62c94198b4293906ab5f5b38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:53 [async_llm.py:261] Added request cmpl-4fb289dc62c94198b4293906ab5f5b38-0.
INFO 03-02 00:02:55 [logger.py:42] Received request cmpl-4def0b255817442d997737a68ffdd944-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:55 [async_llm.py:261] Added request cmpl-4def0b255817442d997737a68ffdd944-0.
INFO 03-02 00:02:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:02:56 [logger.py:42] Received request cmpl-12dc0dc1323e4186b93158e225133220-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:56 [async_llm.py:261] Added request cmpl-12dc0dc1323e4186b93158e225133220-0.
INFO 03-02 00:02:57 [logger.py:42] Received request cmpl-25875dc2976c4178935918c2acfec9d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:57 [async_llm.py:261] Added request cmpl-25875dc2976c4178935918c2acfec9d0-0.
INFO 03-02 00:02:58 [logger.py:42] Received request cmpl-2fa1814b7c02480c90215df4fbb3a084-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:58 [async_llm.py:261] Added request cmpl-2fa1814b7c02480c90215df4fbb3a084-0.
INFO 03-02 00:02:59 [logger.py:42] Received request cmpl-b4a68278f0224d3ebbeef07749a8c4b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:02:59 [async_llm.py:261] Added request cmpl-b4a68278f0224d3ebbeef07749a8c4b6-0.
INFO 03-02 00:03:00 [logger.py:42] Received request cmpl-310d55719b77455390940b49e2a507a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:00 [async_llm.py:261] Added request cmpl-310d55719b77455390940b49e2a507a4-0.
INFO 03-02 00:03:01 [logger.py:42] Received request cmpl-e674e702d2c842898e2a8d467696d6ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:01 [async_llm.py:261] Added request cmpl-e674e702d2c842898e2a8d467696d6ba-0.
INFO 03-02 00:03:02 [logger.py:42] Received request cmpl-5e85295c5f7841f9b7fa738767423171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:02 [async_llm.py:261] Added request cmpl-5e85295c5f7841f9b7fa738767423171-0.
INFO 03-02 00:03:03 [logger.py:42] Received request cmpl-22974d66aff14311b53e54bf8ffd9b35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:03 [async_llm.py:261] Added request cmpl-22974d66aff14311b53e54bf8ffd9b35-0.
INFO 03-02 00:03:04 [logger.py:42] Received request cmpl-47182e7d6c474502aa5071707ceb8501-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:04 [async_llm.py:261] Added request cmpl-47182e7d6c474502aa5071707ceb8501-0.
INFO 03-02 00:03:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:05 [logger.py:42] Received request cmpl-1db5510c466043288a9f08478a437d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:05 [async_llm.py:261] Added request cmpl-1db5510c466043288a9f08478a437d81-0.
INFO 03-02 00:03:07 [logger.py:42] Received request cmpl-b0e89f3a812d459b8c26b9428d3d974c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:07 [async_llm.py:261] Added request cmpl-b0e89f3a812d459b8c26b9428d3d974c-0.
INFO 03-02 00:03:08 [logger.py:42] Received request cmpl-95d14080f20c4746bee6b07620432460-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:08 [async_llm.py:261] Added request cmpl-95d14080f20c4746bee6b07620432460-0.
INFO 03-02 00:03:09 [logger.py:42] Received request cmpl-b2b70b1096d94f9b8e5d4a49c6a0cd4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:09 [async_llm.py:261] Added request cmpl-b2b70b1096d94f9b8e5d4a49c6a0cd4a-0.
INFO 03-02 00:03:10 [logger.py:42] Received request cmpl-4a3d4b3c45ab4c8597c6b0897fe11f47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:10 [async_llm.py:261] Added request cmpl-4a3d4b3c45ab4c8597c6b0897fe11f47-0.
INFO 03-02 00:03:11 [logger.py:42] Received request cmpl-d87f6b0e9b3048f1890fb506576b975f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:11 [async_llm.py:261] Added request cmpl-d87f6b0e9b3048f1890fb506576b975f-0.
INFO 03-02 00:03:12 [logger.py:42] Received request cmpl-f6f18e3cf0ec40b1895c5b6d6e01f938-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:12 [async_llm.py:261] Added request cmpl-f6f18e3cf0ec40b1895c5b6d6e01f938-0.
INFO 03-02 00:03:13 [logger.py:42] Received request cmpl-4dc7497f296e45b18fa9690dd6a496ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:13 [async_llm.py:261] Added request cmpl-4dc7497f296e45b18fa9690dd6a496ca-0.
INFO 03-02 00:03:14 [logger.py:42] Received request cmpl-5f968e45e13b4758a2f0576856dfbdfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:14 [async_llm.py:261] Added request cmpl-5f968e45e13b4758a2f0576856dfbdfe-0.
INFO 03-02 00:03:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:15 [logger.py:42] Received request cmpl-0236c6752f6a427b97ce1e8a25cd6c9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:15 [async_llm.py:261] Added request cmpl-0236c6752f6a427b97ce1e8a25cd6c9e-0.
INFO 03-02 00:03:16 [logger.py:42] Received request cmpl-c56140ee224e44908e9d07a8fa6dc121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:16 [async_llm.py:261] Added request cmpl-c56140ee224e44908e9d07a8fa6dc121-0.
INFO 03-02 00:03:18 [logger.py:42] Received request cmpl-263e65b0dbb342229ca725531d053710-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:18 [async_llm.py:261] Added request cmpl-263e65b0dbb342229ca725531d053710-0.
INFO 03-02 00:03:19 [logger.py:42] Received request cmpl-21ee29a8b47f4a01a5184c4d754c3390-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:19 [async_llm.py:261] Added request cmpl-21ee29a8b47f4a01a5184c4d754c3390-0.
INFO 03-02 00:03:20 [logger.py:42] Received request cmpl-9942dd64854749de8c6d4cd7b9b31bb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:20 [async_llm.py:261] Added request cmpl-9942dd64854749de8c6d4cd7b9b31bb6-0.
INFO 03-02 00:03:21 [logger.py:42] Received request cmpl-ad0deb96608f4be4b0f18ddea2005c81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:21 [async_llm.py:261] Added request cmpl-ad0deb96608f4be4b0f18ddea2005c81-0.
INFO 03-02 00:03:22 [logger.py:42] Received request cmpl-3361147d85534b9b9f62d0fc97b24fba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:22 [async_llm.py:261] Added request cmpl-3361147d85534b9b9f62d0fc97b24fba-0.
INFO 03-02 00:03:23 [logger.py:42] Received request cmpl-40d055bb1f5b41a3a7471d06d9f738cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:23 [async_llm.py:261] Added request cmpl-40d055bb1f5b41a3a7471d06d9f738cf-0.
INFO 03-02 00:03:24 [logger.py:42] Received request cmpl-6b1ef220ac0f4a458efd2d2c357dc194-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:24 [async_llm.py:261] Added request cmpl-6b1ef220ac0f4a458efd2d2c357dc194-0.
INFO 03-02 00:03:25 [logger.py:42] Received request cmpl-d10fb07367e042d1a113d8da2de3bd1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:25 [async_llm.py:261] Added request cmpl-d10fb07367e042d1a113d8da2de3bd1b-0.
INFO 03-02 00:03:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:26 [logger.py:42] Received request cmpl-30ed35e00517460fbc76d6752291e999-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:26 [async_llm.py:261] Added request cmpl-30ed35e00517460fbc76d6752291e999-0.
INFO 03-02 00:03:27 [logger.py:42] Received request cmpl-13b91e1156e8499ab559442288d04b0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:27 [async_llm.py:261] Added request cmpl-13b91e1156e8499ab559442288d04b0d-0.
INFO 03-02 00:03:28 [logger.py:42] Received request cmpl-8a65281ac696486a8530de8231a84cce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:28 [async_llm.py:261] Added request cmpl-8a65281ac696486a8530de8231a84cce-0.
INFO 03-02 00:03:30 [logger.py:42] Received request cmpl-d1384f395d024613a69aa5fc284be78b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:30 [async_llm.py:261] Added request cmpl-d1384f395d024613a69aa5fc284be78b-0.
INFO 03-02 00:03:31 [logger.py:42] Received request cmpl-24d7fd0210a54fb0aad807265f81398a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:31 [async_llm.py:261] Added request cmpl-24d7fd0210a54fb0aad807265f81398a-0.
INFO 03-02 00:03:32 [logger.py:42] Received request cmpl-5999b556db39458baa118fd15b43617e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:32 [async_llm.py:261] Added request cmpl-5999b556db39458baa118fd15b43617e-0.
INFO 03-02 00:03:33 [logger.py:42] Received request cmpl-a7fa11a322c44982b98bd2c10c5d6a71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:33 [async_llm.py:261] Added request cmpl-a7fa11a322c44982b98bd2c10c5d6a71-0.
INFO 03-02 00:03:34 [logger.py:42] Received request cmpl-2627b53b83ca4a34bb2a86c958c2d63b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:34 [async_llm.py:261] Added request cmpl-2627b53b83ca4a34bb2a86c958c2d63b-0.
INFO 03-02 00:03:35 [logger.py:42] Received request cmpl-a1064a890a9844e3ad89397712791cb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:35 [async_llm.py:261] Added request cmpl-a1064a890a9844e3ad89397712791cb8-0.
INFO 03-02 00:03:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:36 [logger.py:42] Received request cmpl-ad1338cdb03b4b56aacfebb6d20c5cb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:36 [async_llm.py:261] Added request cmpl-ad1338cdb03b4b56aacfebb6d20c5cb0-0.
INFO 03-02 00:03:37 [logger.py:42] Received request cmpl-7ae9ea29037f41b98041f309a266a1d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:37 [async_llm.py:261] Added request cmpl-7ae9ea29037f41b98041f309a266a1d9-0.
INFO 03-02 00:03:38 [logger.py:42] Received request cmpl-98000323e19b406bb091c4e13b721193-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:38 [async_llm.py:261] Added request cmpl-98000323e19b406bb091c4e13b721193-0.
INFO 03-02 00:03:39 [logger.py:42] Received request cmpl-5fe4943b1b094855b9952a2c9995605a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:39 [async_llm.py:261] Added request cmpl-5fe4943b1b094855b9952a2c9995605a-0.
INFO 03-02 00:03:41 [logger.py:42] Received request cmpl-d0bdfacbee0443129d2c9e02cfb76f12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:41 [async_llm.py:261] Added request cmpl-d0bdfacbee0443129d2c9e02cfb76f12-0.
INFO 03-02 00:03:42 [logger.py:42] Received request cmpl-a8b337e96c46444a96394d683d27b1b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:42 [async_llm.py:261] Added request cmpl-a8b337e96c46444a96394d683d27b1b4-0.
INFO 03-02 00:03:43 [logger.py:42] Received request cmpl-97c49bfb5b354ce8a69e0ba5bf789a80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:43 [async_llm.py:261] Added request cmpl-97c49bfb5b354ce8a69e0ba5bf789a80-0.
INFO 03-02 00:03:44 [logger.py:42] Received request cmpl-b1a987779a6a436e8a555c776ff47e5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:44 [async_llm.py:261] Added request cmpl-b1a987779a6a436e8a555c776ff47e5f-0.
INFO 03-02 00:03:45 [logger.py:42] Received request cmpl-8549fc2f4def45338107ff8c8df2f18f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:45 [async_llm.py:261] Added request cmpl-8549fc2f4def45338107ff8c8df2f18f-0.
INFO 03-02 00:03:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:46 [logger.py:42] Received request cmpl-8e0f9827dd0c478a96eb3342e440ae99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:46 [async_llm.py:261] Added request cmpl-8e0f9827dd0c478a96eb3342e440ae99-0.
INFO 03-02 00:03:47 [logger.py:42] Received request cmpl-94e9f08a6a554437b72c017edf3c5ee3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:47 [async_llm.py:261] Added request cmpl-94e9f08a6a554437b72c017edf3c5ee3-0.
INFO 03-02 00:03:48 [logger.py:42] Received request cmpl-a3bdea5f9c7445cea104adebc9643e71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:48 [async_llm.py:261] Added request cmpl-a3bdea5f9c7445cea104adebc9643e71-0.
INFO 03-02 00:03:49 [logger.py:42] Received request cmpl-dc7b0c76abab49faa14f5626bcbdd0ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:49 [async_llm.py:261] Added request cmpl-dc7b0c76abab49faa14f5626bcbdd0ce-0.
INFO 03-02 00:03:50 [logger.py:42] Received request cmpl-069b7111404d4c7b811b205672794e34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:50 [async_llm.py:261] Added request cmpl-069b7111404d4c7b811b205672794e34-0.
INFO 03-02 00:03:51 [logger.py:42] Received request cmpl-26280fc7cf15481ca6706a6603159a3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:51 [async_llm.py:261] Added request cmpl-26280fc7cf15481ca6706a6603159a3e-0.
INFO 03-02 00:03:53 [logger.py:42] Received request cmpl-5182bb7e79784089bdb26a45a93720f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:53 [async_llm.py:261] Added request cmpl-5182bb7e79784089bdb26a45a93720f8-0.
INFO 03-02 00:03:54 [logger.py:42] Received request cmpl-9460a84bff504ff1b1857af7045e72f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:54 [async_llm.py:261] Added request cmpl-9460a84bff504ff1b1857af7045e72f5-0.
INFO 03-02 00:03:55 [logger.py:42] Received request cmpl-03973bdb3f5e43dd8f4e8442ee144488-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:55 [async_llm.py:261] Added request cmpl-03973bdb3f5e43dd8f4e8442ee144488-0.
INFO 03-02 00:03:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:03:56 [logger.py:42] Received request cmpl-9f60256a4bfa4f2aa20b3405442bedb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:56 [async_llm.py:261] Added request cmpl-9f60256a4bfa4f2aa20b3405442bedb8-0.
INFO 03-02 00:03:57 [logger.py:42] Received request cmpl-f7a0315f689348649435692c57752aaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:57 [async_llm.py:261] Added request cmpl-f7a0315f689348649435692c57752aaa-0.
INFO 03-02 00:03:58 [logger.py:42] Received request cmpl-67ee181bdb744a5a883ec5616381a108-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:58 [async_llm.py:261] Added request cmpl-67ee181bdb744a5a883ec5616381a108-0.
INFO 03-02 00:03:59 [logger.py:42] Received request cmpl-189e4537d9834c9d9e5751c121021182-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:03:59 [async_llm.py:261] Added request cmpl-189e4537d9834c9d9e5751c121021182-0.
INFO 03-02 00:04:00 [logger.py:42] Received request cmpl-69693c824f2546528928b8fdabebb917-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:00 [async_llm.py:261] Added request cmpl-69693c824f2546528928b8fdabebb917-0.
INFO 03-02 00:04:01 [logger.py:42] Received request cmpl-895c1ecabb8643118eed3e81f903851e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:01 [async_llm.py:261] Added request cmpl-895c1ecabb8643118eed3e81f903851e-0.
INFO 03-02 00:04:02 [logger.py:42] Received request cmpl-831de82be2824f8994b97aeac6f36d88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:02 [async_llm.py:261] Added request cmpl-831de82be2824f8994b97aeac6f36d88-0.
INFO 03-02 00:04:04 [logger.py:42] Received request cmpl-2abc92efab4842b3a4196125b0ac2943-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:04 [async_llm.py:261] Added request cmpl-2abc92efab4842b3a4196125b0ac2943-0.
INFO 03-02 00:04:05 [logger.py:42] Received request cmpl-a0d8425f25d84787a0b1e3732a043554-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:05 [async_llm.py:261] Added request cmpl-a0d8425f25d84787a0b1e3732a043554-0.
INFO 03-02 00:04:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:06 [logger.py:42] Received request cmpl-5d20602966cb4ee5aa720a75b0d88eb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:06 [async_llm.py:261] Added request cmpl-5d20602966cb4ee5aa720a75b0d88eb3-0.
INFO 03-02 00:04:07 [logger.py:42] Received request cmpl-86ed34227e0c433abac1e1ecae113549-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:07 [async_llm.py:261] Added request cmpl-86ed34227e0c433abac1e1ecae113549-0.
INFO 03-02 00:04:08 [logger.py:42] Received request cmpl-c8ded872ec6145a6b700ee392acf1c18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:08 [async_llm.py:261] Added request cmpl-c8ded872ec6145a6b700ee392acf1c18-0.
INFO 03-02 00:04:09 [logger.py:42] Received request cmpl-fd0a86df15c84e0692ba00e6fba23c17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:09 [async_llm.py:261] Added request cmpl-fd0a86df15c84e0692ba00e6fba23c17-0.
[... 5 request/response triplets identical to the above except for timestamp and request ID, 03-02 00:04:10 through 00:04:15, elided ...]
INFO 03-02 00:04:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 request/response triplets identical except for timestamp and request ID, 03-02 00:04:16 through 00:04:24, elided ...]
INFO 03-02 00:04:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 request/response triplets identical except for timestamp and request ID, 03-02 00:04:26 through 00:04:34, elided ...]
INFO 03-02 00:04:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 request/response triplets identical except for timestamp and request ID, 03-02 00:04:35 through 00:04:45, elided ...]
INFO 03-02 00:04:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 8 further request entries identical except for timestamp and request ID, 03-02 00:04:46 through 00:04:54, elided; the final "Received request" entry is cut off at the end of the captured log ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:54 [async_llm.py:261] Added request cmpl-c5586b5519c844538b855ddc684570d3-0.
INFO 03-02 00:04:55 [logger.py:42] Received request cmpl-ade3cbc4d2914c5bbdf7652bf6e11ff3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:55 [async_llm.py:261] Added request cmpl-ade3cbc4d2914c5bbdf7652bf6e11ff3-0.
INFO 03-02 00:04:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:04:56 [logger.py:42] Received request cmpl-e2f8a201ed6c46d2834e80697dd9b542-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:56 [async_llm.py:261] Added request cmpl-e2f8a201ed6c46d2834e80697dd9b542-0.
INFO 03-02 00:04:57 [logger.py:42] Received request cmpl-8c71d6d873b74003ac876ebadad3dae7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:57 [async_llm.py:261] Added request cmpl-8c71d6d873b74003ac876ebadad3dae7-0.
INFO 03-02 00:04:58 [logger.py:42] Received request cmpl-565e40b7f7a84ffbb0572e20f0540ff2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:58 [async_llm.py:261] Added request cmpl-565e40b7f7a84ffbb0572e20f0540ff2-0.
INFO 03-02 00:04:59 [logger.py:42] Received request cmpl-9dde51f505cb430c9efcfa2587b9f5bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:04:59 [async_llm.py:261] Added request cmpl-9dde51f505cb430c9efcfa2587b9f5bb-0.
INFO 03-02 00:05:01 [logger.py:42] Received request cmpl-1c97a321412f4334bc66cb190865f292-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:01 [async_llm.py:261] Added request cmpl-1c97a321412f4334bc66cb190865f292-0.
INFO 03-02 00:05:02 [logger.py:42] Received request cmpl-3a1caf3fae9d461bb9ed5af15680853a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:02 [async_llm.py:261] Added request cmpl-3a1caf3fae9d461bb9ed5af15680853a-0.
INFO 03-02 00:05:03 [logger.py:42] Received request cmpl-fa75bd428da949ae90d285adb36bd2da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:03 [async_llm.py:261] Added request cmpl-fa75bd428da949ae90d285adb36bd2da-0.
INFO 03-02 00:05:04 [logger.py:42] Received request cmpl-4e1a816744694e4cbefa94a0f3a06a1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:04 [async_llm.py:261] Added request cmpl-4e1a816744694e4cbefa94a0f3a06a1f-0.
INFO 03-02 00:05:05 [logger.py:42] Received request cmpl-2366a902c3b5417ba39ea2a65429ddcd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:05 [async_llm.py:261] Added request cmpl-2366a902c3b5417ba39ea2a65429ddcd-0.
INFO 03-02 00:05:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:06 [logger.py:42] Received request cmpl-efbec4a879dc49d7b01773b29df14a72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:06 [async_llm.py:261] Added request cmpl-efbec4a879dc49d7b01773b29df14a72-0.
INFO 03-02 00:05:07 [logger.py:42] Received request cmpl-93fc3cb90da54a568714dffa036993db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:07 [async_llm.py:261] Added request cmpl-93fc3cb90da54a568714dffa036993db-0.
INFO 03-02 00:05:08 [logger.py:42] Received request cmpl-66f184b7b96a45cd8dc3d3dff45b3137-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:08 [async_llm.py:261] Added request cmpl-66f184b7b96a45cd8dc3d3dff45b3137-0.
INFO 03-02 00:05:09 [logger.py:42] Received request cmpl-d786d1a140b24a328e5f8da171efa5c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:09 [async_llm.py:261] Added request cmpl-d786d1a140b24a328e5f8da171efa5c9-0.
INFO 03-02 00:05:10 [logger.py:42] Received request cmpl-00e8db7b7f314719b8dd89f8d0615ed6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:10 [async_llm.py:261] Added request cmpl-00e8db7b7f314719b8dd89f8d0615ed6-0.
INFO 03-02 00:05:11 [logger.py:42] Received request cmpl-805bfc179c824beba66086ff25c1d1bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:11 [async_llm.py:261] Added request cmpl-805bfc179c824beba66086ff25c1d1bd-0.
INFO 03-02 00:05:13 [logger.py:42] Received request cmpl-0aa364653349482eb475e0064bee3620-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:13 [async_llm.py:261] Added request cmpl-0aa364653349482eb475e0064bee3620-0.
INFO 03-02 00:05:14 [logger.py:42] Received request cmpl-f7502de4abe747c5912e02df7e9441c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:14 [async_llm.py:261] Added request cmpl-f7502de4abe747c5912e02df7e9441c6-0.
INFO 03-02 00:05:15 [logger.py:42] Received request cmpl-7ea08013202f46b3b30ee6c3658e42ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:15 [async_llm.py:261] Added request cmpl-7ea08013202f46b3b30ee6c3658e42ae-0.
INFO 03-02 00:05:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:16 [logger.py:42] Received request cmpl-9feacabb5ed4471987f9345c9bb76a95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:16 [async_llm.py:261] Added request cmpl-9feacabb5ed4471987f9345c9bb76a95-0.
INFO 03-02 00:05:17 [logger.py:42] Received request cmpl-6b4f58b98fbd419fa0dfb4438f71be84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:17 [async_llm.py:261] Added request cmpl-6b4f58b98fbd419fa0dfb4438f71be84-0.
INFO 03-02 00:05:18 [logger.py:42] Received request cmpl-e07cf911accb43cd8a177c8492abdb8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:18 [async_llm.py:261] Added request cmpl-e07cf911accb43cd8a177c8492abdb8c-0.
INFO 03-02 00:05:19 [logger.py:42] Received request cmpl-6877e9ca8fa346f3a8357e1cfb4bc6bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:19 [async_llm.py:261] Added request cmpl-6877e9ca8fa346f3a8357e1cfb4bc6bf-0.
INFO 03-02 00:05:20 [logger.py:42] Received request cmpl-53095047f69749df87d5d9baae362c74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:20 [async_llm.py:261] Added request cmpl-53095047f69749df87d5d9baae362c74-0.
INFO 03-02 00:05:21 [logger.py:42] Received request cmpl-05977b65876243a494c4a4916d0b91d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:21 [async_llm.py:261] Added request cmpl-05977b65876243a494c4a4916d0b91d7-0.
INFO 03-02 00:05:22 [logger.py:42] Received request cmpl-6555f0fefda4463e8dfa6f8b4ccc5216-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:22 [async_llm.py:261] Added request cmpl-6555f0fefda4463e8dfa6f8b4ccc5216-0.
INFO 03-02 00:05:24 [logger.py:42] Received request cmpl-8661831c492c4ca2884e37e03bd019c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:24 [async_llm.py:261] Added request cmpl-8661831c492c4ca2884e37e03bd019c4-0.
INFO 03-02 00:05:25 [logger.py:42] Received request cmpl-be3e22931a00400383a2b2b08f9e7926-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:25 [async_llm.py:261] Added request cmpl-be3e22931a00400383a2b2b08f9e7926-0.
INFO 03-02 00:05:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:26 [logger.py:42] Received request cmpl-484a862e1bcb4f37a13a826f5b3e3ed7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:26 [async_llm.py:261] Added request cmpl-484a862e1bcb4f37a13a826f5b3e3ed7-0.
INFO 03-02 00:05:27 [logger.py:42] Received request cmpl-7c78a1b3111c4a7ca0bb721485d1ae72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:27 [async_llm.py:261] Added request cmpl-7c78a1b3111c4a7ca0bb721485d1ae72-0.
INFO 03-02 00:05:28 [logger.py:42] Received request cmpl-ae2277435e924335b3b6bd548b8b9305-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:28 [async_llm.py:261] Added request cmpl-ae2277435e924335b3b6bd548b8b9305-0.
INFO 03-02 00:05:29 [logger.py:42] Received request cmpl-d61b6e0dc366401d8403ca9f449abfd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:29 [async_llm.py:261] Added request cmpl-d61b6e0dc366401d8403ca9f449abfd5-0.
INFO 03-02 00:05:30 [logger.py:42] Received request cmpl-2c71d9c125204edbb42ba4cee77db7c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:30 [async_llm.py:261] Added request cmpl-2c71d9c125204edbb42ba4cee77db7c5-0.
INFO 03-02 00:05:31 [logger.py:42] Received request cmpl-a0dced95664348a9b1fd7c35d033f74d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:31 [async_llm.py:261] Added request cmpl-a0dced95664348a9b1fd7c35d033f74d-0.
INFO 03-02 00:05:32 [logger.py:42] Received request cmpl-da0919b4e09749be8cb65a76b7597706-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:32 [async_llm.py:261] Added request cmpl-da0919b4e09749be8cb65a76b7597706-0.
INFO 03-02 00:05:33 [logger.py:42] Received request cmpl-497d15aeb0c1483f9a5c5251ac0447a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:33 [async_llm.py:261] Added request cmpl-497d15aeb0c1483f9a5c5251ac0447a7-0.
INFO 03-02 00:05:34 [logger.py:42] Received request cmpl-ca6277d0963745bb8afaa06a33f58a6e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:34 [async_llm.py:261] Added request cmpl-ca6277d0963745bb8afaa06a33f58a6e-0.
INFO 03-02 00:05:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:36 [logger.py:42] Received request cmpl-e44cb94467524516b22f33d7730269b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:36 [async_llm.py:261] Added request cmpl-e44cb94467524516b22f33d7730269b1-0.
INFO 03-02 00:05:37 [logger.py:42] Received request cmpl-9137fd8188b14d8ba134c150d23e97cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:37 [async_llm.py:261] Added request cmpl-9137fd8188b14d8ba134c150d23e97cb-0.
INFO 03-02 00:05:38 [logger.py:42] Received request cmpl-ece54e5d72d540c2ae776e6af518bfc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:38 [async_llm.py:261] Added request cmpl-ece54e5d72d540c2ae776e6af518bfc3-0.
INFO 03-02 00:05:39 [logger.py:42] Received request cmpl-715f90dcd7fc4609b81c5d526f5e89ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:39 [async_llm.py:261] Added request cmpl-715f90dcd7fc4609b81c5d526f5e89ae-0.
INFO 03-02 00:05:40 [logger.py:42] Received request cmpl-55f658ae73e740898ad17c0a6d9cb5ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:40 [async_llm.py:261] Added request cmpl-55f658ae73e740898ad17c0a6d9cb5ce-0.
INFO 03-02 00:05:41 [logger.py:42] Received request cmpl-72aee124385949a0b30dd28f1ebb04b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:41 [async_llm.py:261] Added request cmpl-72aee124385949a0b30dd28f1ebb04b7-0.
INFO 03-02 00:05:42 [logger.py:42] Received request cmpl-a2c202a6222d4a85a0d6b9a37a65bebd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:42 [async_llm.py:261] Added request cmpl-a2c202a6222d4a85a0d6b9a37a65bebd-0.
INFO 03-02 00:05:43 [logger.py:42] Received request cmpl-d78c4dc0449d4ebbb3cda23b4cb29f30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:43 [async_llm.py:261] Added request cmpl-d78c4dc0449d4ebbb3cda23b4cb29f30-0.
INFO 03-02 00:05:44 [logger.py:42] Received request cmpl-6c9c0e3c4dfb43c8b992ad2e1e716014-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:44 [async_llm.py:261] Added request cmpl-6c9c0e3c4dfb43c8b992ad2e1e716014-0.
INFO 03-02 00:05:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:45 [logger.py:42] Received request cmpl-d9ccd3c17d224ae991204c7cdd14e906-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:45 [async_llm.py:261] Added request cmpl-d9ccd3c17d224ae991204c7cdd14e906-0.
INFO 03-02 00:05:47 [logger.py:42] Received request cmpl-f6cd4c319dcd44e39e3a599832aa7983-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:47 [async_llm.py:261] Added request cmpl-f6cd4c319dcd44e39e3a599832aa7983-0.
INFO 03-02 00:05:48 [logger.py:42] Received request cmpl-1a3773226afb49cd819bc3d05eb3ec5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:48 [async_llm.py:261] Added request cmpl-1a3773226afb49cd819bc3d05eb3ec5a-0.
INFO 03-02 00:05:49 [logger.py:42] Received request cmpl-0725df8dc635405ebbf77c20c049fb96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:49 [async_llm.py:261] Added request cmpl-0725df8dc635405ebbf77c20c049fb96-0.
INFO 03-02 00:05:50 [logger.py:42] Received request cmpl-2f0e375cf9a649d3a2c36b901092c000-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:50 [async_llm.py:261] Added request cmpl-2f0e375cf9a649d3a2c36b901092c000-0.
INFO 03-02 00:05:51 [logger.py:42] Received request cmpl-13c2ede7f74d4a779f1fee229840e356-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:51 [async_llm.py:261] Added request cmpl-13c2ede7f74d4a779f1fee229840e356-0.
INFO 03-02 00:05:52 [logger.py:42] Received request cmpl-b7c21fa845274fa0809722221d6d8347-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:52 [async_llm.py:261] Added request cmpl-b7c21fa845274fa0809722221d6d8347-0.
INFO 03-02 00:05:53 [logger.py:42] Received request cmpl-caf143b8a9a746d6841165a7280c84c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:53 [async_llm.py:261] Added request cmpl-caf143b8a9a746d6841165a7280c84c1-0.
INFO 03-02 00:05:54 [logger.py:42] Received request cmpl-583ca21fd16a4bf38f78c385dc61d754-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:54 [async_llm.py:261] Added request cmpl-583ca21fd16a4bf38f78c385dc61d754-0.
INFO 03-02 00:05:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:05:55 [logger.py:42] Received request cmpl-a4e9a0ff530e477c941e1afe68b22450-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:55 [async_llm.py:261] Added request cmpl-a4e9a0ff530e477c941e1afe68b22450-0.
INFO 03-02 00:05:56 [logger.py:42] Received request cmpl-07d1d84d35ae4a04b0b0cee2c3bd6f64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:56 [async_llm.py:261] Added request cmpl-07d1d84d35ae4a04b0b0cee2c3bd6f64-0.
INFO 03-02 00:05:57 [logger.py:42] Received request cmpl-f067694b646a48bfbc53c38b442055d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:57 [async_llm.py:261] Added request cmpl-f067694b646a48bfbc53c38b442055d2-0.
INFO 03-02 00:05:59 [logger.py:42] Received request cmpl-9958ea1f3029436f9a2dcbae1acbab1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:05:59 [async_llm.py:261] Added request cmpl-9958ea1f3029436f9a2dcbae1acbab1e-0.
INFO 03-02 00:06:00 [logger.py:42] Received request cmpl-98957bbc5cea49039bc59a5b71806e73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:00 [async_llm.py:261] Added request cmpl-98957bbc5cea49039bc59a5b71806e73-0.
INFO 03-02 00:06:01 [logger.py:42] Received request cmpl-dc9db095e14944feb3f1092a075a8a7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:01 [async_llm.py:261] Added request cmpl-dc9db095e14944feb3f1092a075a8a7c-0.
INFO 03-02 00:06:02 [logger.py:42] Received request cmpl-4c07e81b1e034d4e8d7ee2b669630e7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:02 [async_llm.py:261] Added request cmpl-4c07e81b1e034d4e8d7ee2b669630e7c-0.
INFO 03-02 00:06:03 [logger.py:42] Received request cmpl-a4eccb38ca2a46dab508d879f719b886-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:03 [async_llm.py:261] Added request cmpl-a4eccb38ca2a46dab508d879f719b886-0.
INFO 03-02 00:06:04 [logger.py:42] Received request cmpl-abb576310a1e4dddbb6cfe6030fb732d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:04 [async_llm.py:261] Added request cmpl-abb576310a1e4dddbb6cfe6030fb732d-0.
INFO 03-02 00:06:05 [logger.py:42] Received request cmpl-d3304538b3224ae99c8dbd09fed3e207-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:05 [async_llm.py:261] Added request cmpl-d3304538b3224ae99c8dbd09fed3e207-0.
INFO 03-02 00:06:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
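The periodic `loggers.py` metrics lines above can be scraped for monitoring. A minimal sketch, assuming the exact line format shown in this log (the regex below is tailored to these lines and is not an official vLLM API; it would need adjusting if the log format changes):

```python
import re

# Matches the vLLM-style periodic engine metrics line seen in this log.
METRICS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs"
)

def parse_metrics(line: str):
    """Extract throughput and queue depth from a metrics log line, or None."""
    m = METRICS_RE.search(line)
    if m is None:
        return None
    return {
        "prompt_tps": float(m.group("prompt")),
        "gen_tps": float(m.group("gen")),
        "running": int(m.group("running")),
        "waiting": int(m.group("waiting")),
    }

# Sample line copied from the log above.
sample = ("INFO 03-02 00:06:05 [loggers.py:116] Engine 000: "
          "Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: "
          "5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
          "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")
print(parse_metrics(sample))
```

With `Running: 0` and `Waiting: 0` on every metrics line here, the engine is draining each request before the next arrives — consistent with the roughly one-request-per-second cadence visible in the timestamps.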
INFO 03-02 00:06:06 [logger.py:42] Received request cmpl-83d280468a0049c5ae1f351684b52417-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:06 [async_llm.py:261] Added request cmpl-83d280468a0049c5ae1f351684b52417-0.
INFO 03-02 00:06:07 [logger.py:42] Received request cmpl-2eaec2101ec14ce481abcbe466a843bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:07 [async_llm.py:261] Added request cmpl-2eaec2101ec14ce481abcbe466a843bb-0.
[... 7 similar Received request / 200 OK / Added request log triplets (00:06:08 to 00:06:15), identical prompt and SamplingParams, elided ...]
INFO 03-02 00:06:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request log triplets (00:06:16 to 00:06:25), identical prompt and SamplingParams, elided ...]
INFO 03-02 00:06:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request log triplets (00:06:26 to 00:06:35), identical prompt and SamplingParams, elided ...]
INFO 03-02 00:06:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request log triplets (00:06:36 to 00:06:45), identical prompt and SamplingParams, elided ...]
INFO 03-02 00:06:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
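Each Received request / 200 OK pair in this log corresponds to one OpenAI-style `/v1/completions` call with the SamplingParams shown (greedy decoding, `max_tokens=5`). A minimal client sketch using only the standard library; the base URL is a placeholder and the model name is taken from the Funcpod header, so adjust both for a real deployment:

```python
import json
import urllib.request

def completion_payload(prompt: str, max_tokens: int = 5) -> dict:
    """Build a /v1/completions body matching the SamplingParams in the log:
    greedy decoding (temperature=0.0, top_p=1.0, n=1) with a small token budget."""
    return {
        "model": "CR-70B",  # model name from the Funcpod header above
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,
        "top_p": 1.0,
        "n": 1,
    }

def post_completion(base_url: str, payload: dict) -> dict:
    """POST to the OpenAI-compatible completions endpoint (base_url is a placeholder)."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = completion_payload("write a quick sort algorithm.")
print(payload["max_tokens"])  # 5
# post_completion("http://inferx.example:8000", payload)  # requires a live endpoint
```

Issuing one such call per second reproduces the load pattern recorded in this section of the log.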
[... 6 similar Received request / 200 OK / Added request log triplets (00:06:46 to 00:06:51), identical prompt and SamplingParams, elided ...]
INFO 03-02 00:06:52 [logger.py:42] Received request cmpl-536381cddd5c46a2a74718671545acf8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:52 [async_llm.py:261] Added request cmpl-536381cddd5c46a2a74718671545acf8-0.
INFO 03-02 00:06:53 [logger.py:42] Received request cmpl-a01f89b232364ee087b3eacdb55960b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:53 [async_llm.py:261] Added request cmpl-a01f89b232364ee087b3eacdb55960b0-0.
INFO 03-02 00:06:54 [logger.py:42] Received request cmpl-4972f0b373d84aa9b1ac8d38182b3cd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:06:54 [async_llm.py:261] Added request cmpl-4972f0b373d84aa9b1ac8d38182b3cd1-0.
INFO 03-02 00:06:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:32 [async_llm.py:261] Added request cmpl-94dd99aee3694274830a4484ce33695c-0.
INFO 03-02 00:07:33 [logger.py:42] Received request cmpl-0639cc6289b643cea1be78dd7bb41eec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:33 [async_llm.py:261] Added request cmpl-0639cc6289b643cea1be78dd7bb41eec-0.
INFO 03-02 00:07:34 [logger.py:42] Received request cmpl-248805714f4e43a6873a1dc4dab94e20-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:34 [async_llm.py:261] Added request cmpl-248805714f4e43a6873a1dc4dab94e20-0.
INFO 03-02 00:07:35 [logger.py:42] Received request cmpl-c9d05decbbd84834871c8712b6e3db35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:35 [async_llm.py:261] Added request cmpl-c9d05decbbd84834871c8712b6e3db35-0.
INFO 03-02 00:07:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:36 [logger.py:42] Received request cmpl-541cddcb32294ad9b399b20cbc32ffb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:36 [async_llm.py:261] Added request cmpl-541cddcb32294ad9b399b20cbc32ffb3-0.
INFO 03-02 00:07:37 [logger.py:42] Received request cmpl-153ec43ffe24495db4344830544918e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:37 [async_llm.py:261] Added request cmpl-153ec43ffe24495db4344830544918e6-0.
INFO 03-02 00:07:38 [logger.py:42] Received request cmpl-32135deff6f646bfbbb885f1aa923ec3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:38 [async_llm.py:261] Added request cmpl-32135deff6f646bfbbb885f1aa923ec3-0.
INFO 03-02 00:07:39 [logger.py:42] Received request cmpl-aec72983d28741b68ecc5dd6aeef62de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:39 [async_llm.py:261] Added request cmpl-aec72983d28741b68ecc5dd6aeef62de-0.
INFO 03-02 00:07:40 [logger.py:42] Received request cmpl-31c44765465d44e0864282cc8c6a0895-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:40 [async_llm.py:261] Added request cmpl-31c44765465d44e0864282cc8c6a0895-0.
INFO 03-02 00:07:42 [logger.py:42] Received request cmpl-5f52a78b90b647b9a572b22921f842f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:42 [async_llm.py:261] Added request cmpl-5f52a78b90b647b9a572b22921f842f5-0.
INFO 03-02 00:07:43 [logger.py:42] Received request cmpl-b330099924344c599bb3d13a588a8f78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:43 [async_llm.py:261] Added request cmpl-b330099924344c599bb3d13a588a8f78-0.
INFO 03-02 00:07:44 [logger.py:42] Received request cmpl-fb0fbe4c423c4d8f9c74170c4cf88025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:44 [async_llm.py:261] Added request cmpl-fb0fbe4c423c4d8f9c74170c4cf88025-0.
INFO 03-02 00:07:45 [logger.py:42] Received request cmpl-5fc42a0a6655435d8a5ede3b1de1e984-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:45 [async_llm.py:261] Added request cmpl-5fc42a0a6655435d8a5ede3b1de1e984-0.
INFO 03-02 00:07:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:46 [logger.py:42] Received request cmpl-a4005ea0c6204116a38f899123c8c9f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:46 [async_llm.py:261] Added request cmpl-a4005ea0c6204116a38f899123c8c9f7-0.
INFO 03-02 00:07:47 [logger.py:42] Received request cmpl-f5dfa354cb0c44f58d5e716f1ca84241-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:47 [async_llm.py:261] Added request cmpl-f5dfa354cb0c44f58d5e716f1ca84241-0.
INFO 03-02 00:07:48 [logger.py:42] Received request cmpl-63df353d85774f13a4d6cb5346b450cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:48 [async_llm.py:261] Added request cmpl-63df353d85774f13a4d6cb5346b450cb-0.
INFO 03-02 00:07:49 [logger.py:42] Received request cmpl-64d3f88613c545b0b1217dc0d3870bbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:49 [async_llm.py:261] Added request cmpl-64d3f88613c545b0b1217dc0d3870bbe-0.
INFO 03-02 00:07:50 [logger.py:42] Received request cmpl-9393d9a26bd041be90a1857525455a29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:50 [async_llm.py:261] Added request cmpl-9393d9a26bd041be90a1857525455a29-0.
INFO 03-02 00:07:51 [logger.py:42] Received request cmpl-f4e2321a2fb743a6bd3156fabf735329-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:51 [async_llm.py:261] Added request cmpl-f4e2321a2fb743a6bd3156fabf735329-0.
INFO 03-02 00:07:52 [logger.py:42] Received request cmpl-0efdd001747440188f9c2b7397b50f33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:52 [async_llm.py:261] Added request cmpl-0efdd001747440188f9c2b7397b50f33-0.
INFO 03-02 00:07:54 [logger.py:42] Received request cmpl-6dcfcea4180542d4af13b9351765000f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:54 [async_llm.py:261] Added request cmpl-6dcfcea4180542d4af13b9351765000f-0.
INFO 03-02 00:07:55 [logger.py:42] Received request cmpl-194d3043072e403ab945e062b81b0166-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:55 [async_llm.py:261] Added request cmpl-194d3043072e403ab945e062b81b0166-0.
INFO 03-02 00:07:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:07:56 [logger.py:42] Received request cmpl-b692c606b9c64243a3f1ca468bbe53ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:56 [async_llm.py:261] Added request cmpl-b692c606b9c64243a3f1ca468bbe53ed-0.
INFO 03-02 00:07:57 [logger.py:42] Received request cmpl-acdd195d5d504ce68df99c5eaf887586-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:57 [async_llm.py:261] Added request cmpl-acdd195d5d504ce68df99c5eaf887586-0.
INFO 03-02 00:07:58 [logger.py:42] Received request cmpl-eda56913f54d4203ba4caacf4b83b1cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:58 [async_llm.py:261] Added request cmpl-eda56913f54d4203ba4caacf4b83b1cd-0.
INFO 03-02 00:07:59 [logger.py:42] Received request cmpl-849afe1e2e1443aa83f836310bdcea56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:07:59 [async_llm.py:261] Added request cmpl-849afe1e2e1443aa83f836310bdcea56-0.
INFO 03-02 00:08:00 [logger.py:42] Received request cmpl-df4f0b4ecd3f4f6a82bc6384391d6988-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:00 [async_llm.py:261] Added request cmpl-df4f0b4ecd3f4f6a82bc6384391d6988-0.
INFO 03-02 00:08:01 [logger.py:42] Received request cmpl-6fdea28c9a0a4e0695334ed3dccedddc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:01 [async_llm.py:261] Added request cmpl-6fdea28c9a0a4e0695334ed3dccedddc-0.
INFO 03-02 00:08:02 [logger.py:42] Received request cmpl-9ece0ed0d2234ae296372131633b31fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:02 [async_llm.py:261] Added request cmpl-9ece0ed0d2234ae296372131633b31fb-0.
INFO 03-02 00:08:03 [logger.py:42] Received request cmpl-fee89a40ef834cf58a437c1102d95893-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:03 [async_llm.py:261] Added request cmpl-fee89a40ef834cf58a437c1102d95893-0.
INFO 03-02 00:08:05 [logger.py:42] Received request cmpl-1ed1154b485542fdb1dc064dce66bc9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:05 [async_llm.py:261] Added request cmpl-1ed1154b485542fdb1dc064dce66bc9f-0.
INFO 03-02 00:08:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:06 [logger.py:42] Received request cmpl-9980db8ea28e494d8a2ec1925305b083-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:06 [async_llm.py:261] Added request cmpl-9980db8ea28e494d8a2ec1925305b083-0.
INFO 03-02 00:08:07 [logger.py:42] Received request cmpl-f17e50eaab4744f9b1c7fc849a1141a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:07 [async_llm.py:261] Added request cmpl-f17e50eaab4744f9b1c7fc849a1141a5-0.
INFO 03-02 00:08:08 [logger.py:42] Received request cmpl-28c4b424164745908d3e7864eb8b553b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:08 [async_llm.py:261] Added request cmpl-28c4b424164745908d3e7864eb8b553b-0.
INFO 03-02 00:08:09 [logger.py:42] Received request cmpl-41607cdfd407427185d25ccdfadf2290-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:09 [async_llm.py:261] Added request cmpl-41607cdfd407427185d25ccdfadf2290-0.
INFO 03-02 00:08:10 [logger.py:42] Received request cmpl-2b5172daacd148a29ea96181429d3355-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:10 [async_llm.py:261] Added request cmpl-2b5172daacd148a29ea96181429d3355-0.
INFO 03-02 00:08:11 [logger.py:42] Received request cmpl-a71c4ee4d1384a25bf4e05e7f3243981-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:11 [async_llm.py:261] Added request cmpl-a71c4ee4d1384a25bf4e05e7f3243981-0.
INFO 03-02 00:08:12 [logger.py:42] Received request cmpl-62024d9b3a0e410093323fd90673b64a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:12 [async_llm.py:261] Added request cmpl-62024d9b3a0e410093323fd90673b64a-0.
INFO 03-02 00:08:13 [logger.py:42] Received request cmpl-cc23a9516c8a45eeb1868c74a4cd6f7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:13 [async_llm.py:261] Added request cmpl-cc23a9516c8a45eeb1868c74a4cd6f7b-0.
INFO 03-02 00:08:14 [logger.py:42] Received request cmpl-a14762ed082f49d68c684f00f331e64c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:14 [async_llm.py:261] Added request cmpl-a14762ed082f49d68c684f00f331e64c-0.
INFO 03-02 00:08:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:15 [logger.py:42] Received request cmpl-effd62d5f92741bbac7c86cfaab53c60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:15 [async_llm.py:261] Added request cmpl-effd62d5f92741bbac7c86cfaab53c60-0.
INFO 03-02 00:08:17 [logger.py:42] Received request cmpl-ccbdbd7235434acc95419451860b1c0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:17 [async_llm.py:261] Added request cmpl-ccbdbd7235434acc95419451860b1c0a-0.
INFO 03-02 00:08:18 [logger.py:42] Received request cmpl-9ebb232e36084420ac90307bfd7b5fef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:18 [async_llm.py:261] Added request cmpl-9ebb232e36084420ac90307bfd7b5fef-0.
INFO 03-02 00:08:19 [logger.py:42] Received request cmpl-3ec1c277b1b4450baaac68516331bc47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:19 [async_llm.py:261] Added request cmpl-3ec1c277b1b4450baaac68516331bc47-0.
INFO 03-02 00:08:20 [logger.py:42] Received request cmpl-e107c14ac0034442b3859bd34587ee72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:20 [async_llm.py:261] Added request cmpl-e107c14ac0034442b3859bd34587ee72-0.
INFO 03-02 00:08:21 [logger.py:42] Received request cmpl-c1b909f940104edeaa90d0c7e317d177-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:21 [async_llm.py:261] Added request cmpl-c1b909f940104edeaa90d0c7e317d177-0.
INFO 03-02 00:08:22 [logger.py:42] Received request cmpl-3ae0f225caf94f529c91cd5e1ec234fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:22 [async_llm.py:261] Added request cmpl-3ae0f225caf94f529c91cd5e1ec234fb-0.
INFO 03-02 00:08:23 [logger.py:42] Received request cmpl-bd9de50c37cb478ba2c761ba0cfc077c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:23 [async_llm.py:261] Added request cmpl-bd9de50c37cb478ba2c761ba0cfc077c-0.
INFO 03-02 00:08:24 [logger.py:42] Received request cmpl-16e7b6b035e9425ea5902490b2e4b60e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:24 [async_llm.py:261] Added request cmpl-16e7b6b035e9425ea5902490b2e4b60e-0.
INFO 03-02 00:08:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:25 [logger.py:42] Received request cmpl-7f39a89c5fe8478fb8602b4087a03b36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:25 [async_llm.py:261] Added request cmpl-7f39a89c5fe8478fb8602b4087a03b36-0.
INFO 03-02 00:08:26 [logger.py:42] Received request cmpl-d5ba6b47eb4d4b9aa8c9209d01465f3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:26 [async_llm.py:261] Added request cmpl-d5ba6b47eb4d4b9aa8c9209d01465f3a-0.
INFO 03-02 00:08:28 [logger.py:42] Received request cmpl-20073a32d0a74736aa0e1700d272d721-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:28 [async_llm.py:261] Added request cmpl-20073a32d0a74736aa0e1700d272d721-0.
INFO 03-02 00:08:29 [logger.py:42] Received request cmpl-d1e2dd90afad43e0a11d92187aa1eae1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:29 [async_llm.py:261] Added request cmpl-d1e2dd90afad43e0a11d92187aa1eae1-0.
INFO 03-02 00:08:30 [logger.py:42] Received request cmpl-b838de0cb9894952a7f54b80307d0171-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:30 [async_llm.py:261] Added request cmpl-b838de0cb9894952a7f54b80307d0171-0.
INFO 03-02 00:08:31 [logger.py:42] Received request cmpl-98ca168f93444c32a4a13d9a90a2bd18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:31 [async_llm.py:261] Added request cmpl-98ca168f93444c32a4a13d9a90a2bd18-0.
INFO 03-02 00:08:32 [logger.py:42] Received request cmpl-5318ebbfb8604cf2ac6bb30fc82d33a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:32 [async_llm.py:261] Added request cmpl-5318ebbfb8604cf2ac6bb30fc82d33a4-0.
INFO 03-02 00:08:33 [logger.py:42] Received request cmpl-ccb07ac9ff834e0d8c53ade6e74f9ba4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:33 [async_llm.py:261] Added request cmpl-ccb07ac9ff834e0d8c53ade6e74f9ba4-0.
INFO 03-02 00:08:34 [logger.py:42] Received request cmpl-6ba66c8c94ab4c4988a7c8473e100cf0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:34 [async_llm.py:261] Added request cmpl-6ba66c8c94ab4c4988a7c8473e100cf0-0.
INFO 03-02 00:08:35 [logger.py:42] Received request cmpl-68f10954e90640ceb3c495f7007854f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:35 [async_llm.py:261] Added request cmpl-68f10954e90640ceb3c495f7007854f3-0.
INFO 03-02 00:08:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:36 [logger.py:42] Received request cmpl-303743fd943b4eba8076889b01da902c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:36 [async_llm.py:261] Added request cmpl-303743fd943b4eba8076889b01da902c-0.
INFO 03-02 00:08:37 [logger.py:42] Received request cmpl-180908892ef54c68bc3b054941997a7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:37 [async_llm.py:261] Added request cmpl-180908892ef54c68bc3b054941997a7d-0.
INFO 03-02 00:08:38 [logger.py:42] Received request cmpl-3ec2d4de81c6478f9b1b41d23f14c978-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:38 [async_llm.py:261] Added request cmpl-3ec2d4de81c6478f9b1b41d23f14c978-0.
INFO 03-02 00:08:40 [logger.py:42] Received request cmpl-1b88a75a0bab4484ab7faa8125c5437b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:40 [async_llm.py:261] Added request cmpl-1b88a75a0bab4484ab7faa8125c5437b-0.
INFO 03-02 00:08:41 [logger.py:42] Received request cmpl-b965fd6858d34864b8e990eba3eb7487-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:41 [async_llm.py:261] Added request cmpl-b965fd6858d34864b8e990eba3eb7487-0.
INFO 03-02 00:08:42 [logger.py:42] Received request cmpl-ce61b6dac6c84dbd8f752e5ee278f43b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:42 [async_llm.py:261] Added request cmpl-ce61b6dac6c84dbd8f752e5ee278f43b-0.
INFO 03-02 00:08:43 [logger.py:42] Received request cmpl-2592a84787f64b11b752329da03d0818-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:43 [async_llm.py:261] Added request cmpl-2592a84787f64b11b752329da03d0818-0.
INFO 03-02 00:08:44 [logger.py:42] Received request cmpl-2d80ae63ec8b4be581886a0b5112d151-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:44 [async_llm.py:261] Added request cmpl-2d80ae63ec8b4be581886a0b5112d151-0.
INFO 03-02 00:08:45 [logger.py:42] Received request cmpl-b1a50b1879d94686a8463782fb1fc2e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:08:45 [async_llm.py:261] Added request cmpl-b1a50b1879d94686a8463782fb1fc2e4-0.
INFO 03-02 00:08:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:08:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:30 [async_llm.py:261] Added request cmpl-5eb440097e884d2bba8a5a9f49bee3d7-0.
INFO 03-02 00:09:31 [logger.py:42] Received request cmpl-4c159b7e273d4876a40ee746b6228701-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:31 [async_llm.py:261] Added request cmpl-4c159b7e273d4876a40ee746b6228701-0.
INFO 03-02 00:09:32 [logger.py:42] Received request cmpl-4b2ee500e71347ea842a85b91eaba25b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:32 [async_llm.py:261] Added request cmpl-4b2ee500e71347ea842a85b91eaba25b-0.
INFO 03-02 00:09:33 [logger.py:42] Received request cmpl-17fe0ca8e23f4d54929cf2751e670726-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:33 [async_llm.py:261] Added request cmpl-17fe0ca8e23f4d54929cf2751e670726-0.
INFO 03-02 00:09:34 [logger.py:42] Received request cmpl-2f786eb876404e218ff55a550477660e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:34 [async_llm.py:261] Added request cmpl-2f786eb876404e218ff55a550477660e-0.
INFO 03-02 00:09:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:35 [logger.py:42] Received request cmpl-b42d69c8813d445b8290fa22042e150b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:35 [async_llm.py:261] Added request cmpl-b42d69c8813d445b8290fa22042e150b-0.
INFO 03-02 00:09:36 [logger.py:42] Received request cmpl-727f031e2cc34460ab12e6e7ceb3c3cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:36 [async_llm.py:261] Added request cmpl-727f031e2cc34460ab12e6e7ceb3c3cd-0.
INFO 03-02 00:09:38 [logger.py:42] Received request cmpl-a463d16189334fa0a1705d3d3a154266-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:38 [async_llm.py:261] Added request cmpl-a463d16189334fa0a1705d3d3a154266-0.
INFO 03-02 00:09:39 [logger.py:42] Received request cmpl-fa2d43b18cd748acb22317380c37ceb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:39 [async_llm.py:261] Added request cmpl-fa2d43b18cd748acb22317380c37ceb9-0.
INFO 03-02 00:09:40 [logger.py:42] Received request cmpl-134baabe2be544188bf13ad0d6d49e33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:40 [async_llm.py:261] Added request cmpl-134baabe2be544188bf13ad0d6d49e33-0.
INFO 03-02 00:09:41 [logger.py:42] Received request cmpl-64d3ee89049240728e511650e01e7f55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:41 [async_llm.py:261] Added request cmpl-64d3ee89049240728e511650e01e7f55-0.
INFO 03-02 00:09:42 [logger.py:42] Received request cmpl-4f0d9ed0360a4b6e81888920d6a58e3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:42 [async_llm.py:261] Added request cmpl-4f0d9ed0360a4b6e81888920d6a58e3a-0.
INFO 03-02 00:09:43 [logger.py:42] Received request cmpl-56ff0f1d9ce84f968cab3489048aa757-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:43 [async_llm.py:261] Added request cmpl-56ff0f1d9ce84f968cab3489048aa757-0.
INFO 03-02 00:09:44 [logger.py:42] Received request cmpl-89bcb65d69c348198a99568c933d1925-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:44 [async_llm.py:261] Added request cmpl-89bcb65d69c348198a99568c933d1925-0.
INFO 03-02 00:09:45 [logger.py:42] Received request cmpl-bac0f99f54304890b2a5299d403971d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:45 [async_llm.py:261] Added request cmpl-bac0f99f54304890b2a5299d403971d0-0.
INFO 03-02 00:09:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:46 [logger.py:42] Received request cmpl-5ef17bf2d0d144c3bd3e69ff54281494-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:46 [async_llm.py:261] Added request cmpl-5ef17bf2d0d144c3bd3e69ff54281494-0.
INFO 03-02 00:09:47 [logger.py:42] Received request cmpl-450976f2b9b74d9db28b3349f15e98a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:47 [async_llm.py:261] Added request cmpl-450976f2b9b74d9db28b3349f15e98a9-0.
INFO 03-02 00:09:49 [logger.py:42] Received request cmpl-4953671577d34e68a5d9ecc06bf5e3b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:49 [async_llm.py:261] Added request cmpl-4953671577d34e68a5d9ecc06bf5e3b8-0.
INFO 03-02 00:09:50 [logger.py:42] Received request cmpl-f23ca57a02e44c26b52565545a45463a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:50 [async_llm.py:261] Added request cmpl-f23ca57a02e44c26b52565545a45463a-0.
INFO 03-02 00:09:51 [logger.py:42] Received request cmpl-e2cf103fe7964ee1bf620e9f4b3b534b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:51 [async_llm.py:261] Added request cmpl-e2cf103fe7964ee1bf620e9f4b3b534b-0.
INFO 03-02 00:09:52 [logger.py:42] Received request cmpl-f199e670485949fc91f9e897f5d942e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:52 [async_llm.py:261] Added request cmpl-f199e670485949fc91f9e897f5d942e7-0.
INFO 03-02 00:09:53 [logger.py:42] Received request cmpl-b10d0399edcf4356901fa21aa779fa83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:53 [async_llm.py:261] Added request cmpl-b10d0399edcf4356901fa21aa779fa83-0.
INFO 03-02 00:09:54 [logger.py:42] Received request cmpl-fce1405e116248a4b8e223665bd646f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:54 [async_llm.py:261] Added request cmpl-fce1405e116248a4b8e223665bd646f7-0.
INFO 03-02 00:09:55 [logger.py:42] Received request cmpl-d69c24f44b504a1c8d45d3900f6f20b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:55 [async_llm.py:261] Added request cmpl-d69c24f44b504a1c8d45d3900f6f20b5-0.
INFO 03-02 00:09:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:09:56 [logger.py:42] Received request cmpl-d498e58f00bc4442976f86976279ffd0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:56 [async_llm.py:261] Added request cmpl-d498e58f00bc4442976f86976279ffd0-0.
INFO 03-02 00:09:57 [logger.py:42] Received request cmpl-1ce7d2af0f104ab497132db459f21f18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:57 [async_llm.py:261] Added request cmpl-1ce7d2af0f104ab497132db459f21f18-0.
INFO 03-02 00:09:58 [logger.py:42] Received request cmpl-5692d2ea1e7f4d02bda84df4fd70b0ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:58 [async_llm.py:261] Added request cmpl-5692d2ea1e7f4d02bda84df4fd70b0ef-0.
INFO 03-02 00:09:59 [logger.py:42] Received request cmpl-3152cd7c523b4acc82fe7178964bf67d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:09:59 [async_llm.py:261] Added request cmpl-3152cd7c523b4acc82fe7178964bf67d-0.
INFO 03-02 00:10:01 [logger.py:42] Received request cmpl-81fb1c029a634f00a6b24d07b0e53196-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:01 [async_llm.py:261] Added request cmpl-81fb1c029a634f00a6b24d07b0e53196-0.
INFO 03-02 00:10:02 [logger.py:42] Received request cmpl-ebd0029207a948729af379e8159afb46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:02 [async_llm.py:261] Added request cmpl-ebd0029207a948729af379e8159afb46-0.
INFO 03-02 00:10:03 [logger.py:42] Received request cmpl-fe5cacc34d454c4eb288ba1fbd318685-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:03 [async_llm.py:261] Added request cmpl-fe5cacc34d454c4eb288ba1fbd318685-0.
INFO 03-02 00:10:04 [logger.py:42] Received request cmpl-b47f1d34884448f68c5860449d42541d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:04 [async_llm.py:261] Added request cmpl-b47f1d34884448f68c5860449d42541d-0.
[... 40 similar request entries (00:10:05 through 00:10:48) omitted: each is the same three-line sequence — Received request, "POST /v1/completions HTTP/1.1" 200 OK, Added request — for the prompt 'write a quick sort algorithm.' with max_tokens=5, arriving roughly once per second from 1.2.3.5:1235. The Engine 000 stats line repeats every 10 s unchanged: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% ...]
INFO 03-02 00:10:49 [logger.py:42] Received request cmpl-bf8628d4c157412585d34ba22b6fad92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:49 [async_llm.py:261] Added request cmpl-bf8628d4c157412585d34ba22b6fad92-0.
INFO 03-02 00:10:50 [logger.py:42] Received request cmpl-daa13a058b6244d0bf9c5a9e4aa80ffc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:50 [async_llm.py:261] Added request cmpl-daa13a058b6244d0bf9c5a9e4aa80ffc-0.
INFO 03-02 00:10:51 [logger.py:42] Received request cmpl-80553501eb004e968e2a2f16d33c3649-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:51 [async_llm.py:261] Added request cmpl-80553501eb004e968e2a2f16d33c3649-0.
INFO 03-02 00:10:52 [logger.py:42] Received request cmpl-8df8e6a3f6e64919a944ea0500d9dafa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:52 [async_llm.py:261] Added request cmpl-8df8e6a3f6e64919a944ea0500d9dafa-0.
INFO 03-02 00:10:53 [logger.py:42] Received request cmpl-61200337fcce4b51a71232ba35ce6aa3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:53 [async_llm.py:261] Added request cmpl-61200337fcce4b51a71232ba35ce6aa3-0.
INFO 03-02 00:10:54 [logger.py:42] Received request cmpl-35df6f8ef5634d22afeb903b7d3e668d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:54 [async_llm.py:261] Added request cmpl-35df6f8ef5634d22afeb903b7d3e668d-0.
INFO 03-02 00:10:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:10:55 [logger.py:42] Received request cmpl-a1bf9db6fc494284939e17bf0436e4a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:55 [async_llm.py:261] Added request cmpl-a1bf9db6fc494284939e17bf0436e4a0-0.
INFO 03-02 00:10:56 [logger.py:42] Received request cmpl-9cdf87f752b84205aca74aba3539ca7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:56 [async_llm.py:261] Added request cmpl-9cdf87f752b84205aca74aba3539ca7d-0.
INFO 03-02 00:10:57 [logger.py:42] Received request cmpl-e2024cf095af48fab713abd73cd51055-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:57 [async_llm.py:261] Added request cmpl-e2024cf095af48fab713abd73cd51055-0.
INFO 03-02 00:10:59 [logger.py:42] Received request cmpl-8d0442ceb7e34ff5a39c8053b85f3709-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:10:59 [async_llm.py:261] Added request cmpl-8d0442ceb7e34ff5a39c8053b85f3709-0.
INFO 03-02 00:11:00 [logger.py:42] Received request cmpl-dc12cd64f7a3459f96966d4eef7c95fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:00 [async_llm.py:261] Added request cmpl-dc12cd64f7a3459f96966d4eef7c95fa-0.
INFO 03-02 00:11:01 [logger.py:42] Received request cmpl-bb544b2ba9cd46808efaf7968adc2a33-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:01 [async_llm.py:261] Added request cmpl-bb544b2ba9cd46808efaf7968adc2a33-0.
INFO 03-02 00:11:02 [logger.py:42] Received request cmpl-83054935da77439fa0a1900626ed00cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:02 [async_llm.py:261] Added request cmpl-83054935da77439fa0a1900626ed00cd-0.
INFO 03-02 00:11:03 [logger.py:42] Received request cmpl-3b8b9bbc60a8412da44ac98b0cc1dcad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:03 [async_llm.py:261] Added request cmpl-3b8b9bbc60a8412da44ac98b0cc1dcad-0.
INFO 03-02 00:11:04 [logger.py:42] Received request cmpl-991a96a8eb58475c9296e50e8e5f7b5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:04 [async_llm.py:261] Added request cmpl-991a96a8eb58475c9296e50e8e5f7b5b-0.
INFO 03-02 00:11:05 [logger.py:42] Received request cmpl-2fbda6fb0a90487e8d25d611089b20f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:05 [async_llm.py:261] Added request cmpl-2fbda6fb0a90487e8d25d611089b20f6-0.
INFO 03-02 00:11:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:06 [logger.py:42] Received request cmpl-6dbd1dfa4796473c9606c4ea9eef5971-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:06 [async_llm.py:261] Added request cmpl-6dbd1dfa4796473c9606c4ea9eef5971-0.
INFO 03-02 00:11:07 [logger.py:42] Received request cmpl-c165e0f75b854cd3844f31f6a3a54ece-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:07 [async_llm.py:261] Added request cmpl-c165e0f75b854cd3844f31f6a3a54ece-0.
INFO 03-02 00:11:08 [logger.py:42] Received request cmpl-5b7ea11455744b4f84f8a9aeeb12b879-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:08 [async_llm.py:261] Added request cmpl-5b7ea11455744b4f84f8a9aeeb12b879-0.
INFO 03-02 00:11:09 [logger.py:42] Received request cmpl-1d789395bdcc40ed899b49011d17fe23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:09 [async_llm.py:261] Added request cmpl-1d789395bdcc40ed899b49011d17fe23-0.
INFO 03-02 00:11:11 [logger.py:42] Received request cmpl-94e52a1a89724c43ae3ee8ecda27db32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:11 [async_llm.py:261] Added request cmpl-94e52a1a89724c43ae3ee8ecda27db32-0.
INFO 03-02 00:11:12 [logger.py:42] Received request cmpl-817a00a6c31146508d4a9be17b93f3cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:12 [async_llm.py:261] Added request cmpl-817a00a6c31146508d4a9be17b93f3cb-0.
INFO 03-02 00:11:13 [logger.py:42] Received request cmpl-bcbf4cae1d31450db1c1b2529fc4e24e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:13 [async_llm.py:261] Added request cmpl-bcbf4cae1d31450db1c1b2529fc4e24e-0.
INFO 03-02 00:11:14 [logger.py:42] Received request cmpl-1865148c127c4d9d84deba3e2af1204b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:14 [async_llm.py:261] Added request cmpl-1865148c127c4d9d84deba3e2af1204b-0.
INFO 03-02 00:11:15 [logger.py:42] Received request cmpl-c0d48541c75d4573a0d16d9711625dfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:15 [async_llm.py:261] Added request cmpl-c0d48541c75d4573a0d16d9711625dfc-0.
INFO 03-02 00:11:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:11:16 [logger.py:42] Received request cmpl-1426c1f32ad14511ab9c36f017bf75f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:16 [async_llm.py:261] Added request cmpl-1426c1f32ad14511ab9c36f017bf75f4-0.
INFO 03-02 00:11:17 [logger.py:42] Received request cmpl-5056e61439cc4100ad0baf032b27278a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:17 [async_llm.py:261] Added request cmpl-5056e61439cc4100ad0baf032b27278a-0.
INFO 03-02 00:11:18 [logger.py:42] Received request cmpl-a4b18a9b7c454529b7560c57618b61cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:18 [async_llm.py:261] Added request cmpl-a4b18a9b7c454529b7560c57618b61cd-0.
INFO 03-02 00:11:19 [logger.py:42] Received request cmpl-98d26906de7a42a99afad45497a355e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:19 [async_llm.py:261] Added request cmpl-98d26906de7a42a99afad45497a355e2-0.
INFO 03-02 00:11:20 [logger.py:42] Received request cmpl-8cc6c16fb5e5474bb8e2ad7721981976-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:20 [async_llm.py:261] Added request cmpl-8cc6c16fb5e5474bb8e2ad7721981976-0.
INFO 03-02 00:11:22 [logger.py:42] Received request cmpl-e9a63a94b7374ebdb44d37bda29e2c1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:22 [async_llm.py:261] Added request cmpl-e9a63a94b7374ebdb44d37bda29e2c1b-0.
INFO 03-02 00:11:23 [logger.py:42] Received request cmpl-9b99007a979444f0bd41e50c1dccaffd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:23 [async_llm.py:261] Added request cmpl-9b99007a979444f0bd41e50c1dccaffd-0.
INFO 03-02 00:11:24 [logger.py:42] Received request cmpl-00255a079bdf40c5b75211cd561765f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:24 [async_llm.py:261] Added request cmpl-00255a079bdf40c5b75211cd561765f7-0.
INFO 03-02 00:11:25 [logger.py:42] Received request cmpl-7fccceefd1c9479db169bc0061ee5f38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:11:25 [async_llm.py:261] Added request cmpl-7fccceefd1c9479db169bc0061ee5f38-0.
INFO 03-02 00:11:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
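The periodic Engine 000 stats are consistent with the request stream: every logged request carries 7 prompt tokens (the length of `prompt_token_ids`) and caps generation at 5 tokens, and the timestamps show roughly 0.9 requests/s. A quick arithmetic check (the request rate is an approximation read off the log timestamps):

```python
# Each request in the log: 7 prompt tokens, at most 5 generated tokens.
prompt_tokens = 7      # len([128000, 5040, 264, 4062, 3460, 12384, 13])
max_new_tokens = 5     # max_tokens in SamplingParams
req_rate = 0.9         # approx requests/s, inferred from log timestamps

prompt_tps = prompt_tokens * req_rate    # 6.3 tokens/s
gen_tps = max_new_tokens * req_rate      # 4.5 tokens/s

print(prompt_tps, gen_tps)  # matches the reported 6.3 / 4.5 averages
```

This also explains why `Running: 0 reqs` appears alongside nonzero throughput: each 5-token completion finishes well within the 10 s reporting window, so the snapshot queue is empty while the windowed averages are not.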
[... repeated log cycles elided: identical "Received request" / "200 OK" / "Added request" entries (same prompt, max_tokens=5) arriving at roughly one request per second from 00:11:26 through 00:12:06, with the same Engine 000 summary (6.3 prompt tok/s, 4.5 generation tok/s, 0 running, 0 waiting, 0.0% KV cache usage) logged every 10 s ...]
INFO 03-02 00:12:06 [async_llm.py:261] Added request cmpl-e7912d8543cf41839112bcd06a5681c1-0.
INFO 03-02 00:12:08 [logger.py:42] Received request cmpl-f6ac3bc51ee242f8b16c12e379a5cf6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:08 [async_llm.py:261] Added request cmpl-f6ac3bc51ee242f8b16c12e379a5cf6d-0.
INFO 03-02 00:12:09 [logger.py:42] Received request cmpl-55f327f79b604e9099233d31c951e6d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:09 [async_llm.py:261] Added request cmpl-55f327f79b604e9099233d31c951e6d4-0.
INFO 03-02 00:12:10 [logger.py:42] Received request cmpl-0ac3b6de3b3e488a853335f4fe0043f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:10 [async_llm.py:261] Added request cmpl-0ac3b6de3b3e488a853335f4fe0043f5-0.
INFO 03-02 00:12:11 [logger.py:42] Received request cmpl-0bdca86f6cf3495dad9e97a40582f713-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:11 [async_llm.py:261] Added request cmpl-0bdca86f6cf3495dad9e97a40582f713-0.
INFO 03-02 00:12:12 [logger.py:42] Received request cmpl-c5c7b09332c34dd38614d42bd3106b13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:12 [async_llm.py:261] Added request cmpl-c5c7b09332c34dd38614d42bd3106b13-0.
INFO 03-02 00:12:13 [logger.py:42] Received request cmpl-42bbf555a28447de9380d91a9eb5db56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:13 [async_llm.py:261] Added request cmpl-42bbf555a28447de9380d91a9eb5db56-0.
INFO 03-02 00:12:14 [logger.py:42] Received request cmpl-cadb17bf65024ea3ada6a60e89f53319-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:14 [async_llm.py:261] Added request cmpl-cadb17bf65024ea3ada6a60e89f53319-0.
INFO 03-02 00:12:15 [logger.py:42] Received request cmpl-77919f7053d443129b0fe070598d0074-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:15 [async_llm.py:261] Added request cmpl-77919f7053d443129b0fe070598d0074-0.
INFO 03-02 00:12:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:16 [logger.py:42] Received request cmpl-9cf2c4feb3d94b0998abeb7849369634-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:16 [async_llm.py:261] Added request cmpl-9cf2c4feb3d94b0998abeb7849369634-0.
INFO 03-02 00:12:17 [logger.py:42] Received request cmpl-cf2e8cdb00dc423db458f3e04406724c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:17 [async_llm.py:261] Added request cmpl-cf2e8cdb00dc423db458f3e04406724c-0.
INFO 03-02 00:12:19 [logger.py:42] Received request cmpl-6dd046c9dc1f4496b4a0cab4f8276f0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:19 [async_llm.py:261] Added request cmpl-6dd046c9dc1f4496b4a0cab4f8276f0c-0.
INFO 03-02 00:12:20 [logger.py:42] Received request cmpl-0c7394ad35c74638aea64f3263688a74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:20 [async_llm.py:261] Added request cmpl-0c7394ad35c74638aea64f3263688a74-0.
INFO 03-02 00:12:21 [logger.py:42] Received request cmpl-b0830caf9c854f66892173d650f95c70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:21 [async_llm.py:261] Added request cmpl-b0830caf9c854f66892173d650f95c70-0.
INFO 03-02 00:12:22 [logger.py:42] Received request cmpl-565dd3d126fa42c88d09b0cc4c8b8ce2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:22 [async_llm.py:261] Added request cmpl-565dd3d126fa42c88d09b0cc4c8b8ce2-0.
INFO 03-02 00:12:23 [logger.py:42] Received request cmpl-1d86b68344594e8a9c52da4b20681f9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:23 [async_llm.py:261] Added request cmpl-1d86b68344594e8a9c52da4b20681f9b-0.
INFO 03-02 00:12:24 [logger.py:42] Received request cmpl-f3008ba3d649409a8dc544de76023afe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:24 [async_llm.py:261] Added request cmpl-f3008ba3d649409a8dc544de76023afe-0.
INFO 03-02 00:12:25 [logger.py:42] Received request cmpl-6832db33ccdd4563b7a3da22ae4121a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:25 [async_llm.py:261] Added request cmpl-6832db33ccdd4563b7a3da22ae4121a5-0.
INFO 03-02 00:12:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:26 [logger.py:42] Received request cmpl-ff494c51174040d48f4e3959fe464a03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:26 [async_llm.py:261] Added request cmpl-ff494c51174040d48f4e3959fe464a03-0.
INFO 03-02 00:12:27 [logger.py:42] Received request cmpl-fda68597b37041f09f0a463a305b52da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:27 [async_llm.py:261] Added request cmpl-fda68597b37041f09f0a463a305b52da-0.
INFO 03-02 00:12:28 [logger.py:42] Received request cmpl-df7a14852ba94b0791743bf4322146eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:28 [async_llm.py:261] Added request cmpl-df7a14852ba94b0791743bf4322146eb-0.
INFO 03-02 00:12:29 [logger.py:42] Received request cmpl-f2307115aa9a44858b881dafc08fc10f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:29 [async_llm.py:261] Added request cmpl-f2307115aa9a44858b881dafc08fc10f-0.
INFO 03-02 00:12:31 [logger.py:42] Received request cmpl-4c24f77c3c624bbd8c0b782a30d1810f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:31 [async_llm.py:261] Added request cmpl-4c24f77c3c624bbd8c0b782a30d1810f-0.
INFO 03-02 00:12:32 [logger.py:42] Received request cmpl-0fef52ebaa5b4b7f89d7bd12f125ea47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:32 [async_llm.py:261] Added request cmpl-0fef52ebaa5b4b7f89d7bd12f125ea47-0.
INFO 03-02 00:12:33 [logger.py:42] Received request cmpl-8ea420d9e9414624a8aaf3bf8e7055b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:33 [async_llm.py:261] Added request cmpl-8ea420d9e9414624a8aaf3bf8e7055b9-0.
INFO 03-02 00:12:34 [logger.py:42] Received request cmpl-5e16d4588aae45b78e2a0bca9d3a3603-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:34 [async_llm.py:261] Added request cmpl-5e16d4588aae45b78e2a0bca9d3a3603-0.
INFO 03-02 00:12:35 [logger.py:42] Received request cmpl-bfa15c29fd9f4cfab15dc62eb2c33597-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:35 [async_llm.py:261] Added request cmpl-bfa15c29fd9f4cfab15dc62eb2c33597-0.
INFO 03-02 00:12:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:12:36 [logger.py:42] Received request cmpl-1d32060b3b0b4ca593cfcf5162645b0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:36 [async_llm.py:261] Added request cmpl-1d32060b3b0b4ca593cfcf5162645b0e-0.
INFO 03-02 00:12:37 [logger.py:42] Received request cmpl-fd2b8198a84d42ab8e571aa5572ee835-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:37 [async_llm.py:261] Added request cmpl-fd2b8198a84d42ab8e571aa5572ee835-0.
INFO 03-02 00:12:38 [logger.py:42] Received request cmpl-5b608a3d89344a70943714aaf0eb83bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:38 [async_llm.py:261] Added request cmpl-5b608a3d89344a70943714aaf0eb83bf-0.
INFO 03-02 00:12:39 [logger.py:42] Received request cmpl-c6bbd53af0d74da98ad5e7f3a3e88426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:39 [async_llm.py:261] Added request cmpl-c6bbd53af0d74da98ad5e7f3a3e88426-0.
INFO 03-02 00:12:40 [logger.py:42] Received request cmpl-d605b30c7ad94e23870c675566810d58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:40 [async_llm.py:261] Added request cmpl-d605b30c7ad94e23870c675566810d58-0.
INFO 03-02 00:12:42 [logger.py:42] Received request cmpl-14a9ed30fbd34a208d0d70624456413e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:12:42 [async_llm.py:261] Added request cmpl-14a9ed30fbd34a208d0d70624456413e-0.
[... 39 further identical request cycles elided: one 'write a quick sort algorithm.' completion request per second (max_tokens=5, temperature=0.0, identical SamplingParams and prompt_token_ids throughout) from 00:12:43 through 00:13:24, each acknowledged with "POST /v1/completions HTTP/1.1" 200 OK from 1.2.3.5:1235 and an async_llm.py:261 "Added request" entry; engine throughput summaries at 00:12:45, 00:12:55, 00:13:05, and 00:13:15 all report: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% ...]
INFO 03-02 00:13:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 similar Received request / 200 OK / Added request entries omitted (00:13:25–00:13:35) ...]
INFO 03-02 00:13:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request entries omitted (00:13:36–00:13:45) ...]
INFO 03-02 00:13:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request entries omitted (00:13:46–00:13:55) ...]
INFO 03-02 00:13:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar Received request / 200 OK / Added request entries omitted (00:13:56–00:14:05) ...]
INFO 03-02 00:14:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
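The periodic engine stats line above can be sanity-checked against the request pattern in the log: each request carries 7 prompt tokens (the length of `prompt_token_ids`) and is capped at `max_tokens=5`, and requests arrive roughly every 1.1 s (about 0.9 requests/s). A rough consistency check, assuming every request generates the full 5 tokens:

```python
# Consistency check for the reported engine throughput, using figures
# taken from the log itself (an estimate, not an exact reproduction
# of vLLM's internal averaging window).
prompt_tokens_per_req = 7   # len(prompt_token_ids) in each log entry
gen_tokens_per_req = 5      # max_tokens=5, assuming full generation
requests_per_sec = 0.9      # ~10 requests over ~11 s of timestamps

prompt_tps = prompt_tokens_per_req * requests_per_sec
gen_tps = gen_tokens_per_req * requests_per_sec

print(f"Avg prompt throughput: {prompt_tps:.1f} tokens/s")
print(f"Avg generation throughput: {gen_tps:.1f} tokens/s")
```

This matches the logged 6.3 tokens/s prompt and 4.5 tokens/s generation averages, which also explains why `Running: 0 reqs` appears in the same line: each 5-token completion finishes well before the next one-per-second request arrives, so the snapshot sees an idle engine.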
INFO 03-02 00:14:06 [logger.py:42] Received request cmpl-dfd2626fe15342ae94c4133ee1237e90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:06 [async_llm.py:261] Added request cmpl-dfd2626fe15342ae94c4133ee1237e90-0.
INFO 03-02 00:14:07 [logger.py:42] Received request cmpl-4e1b889fff034eba868885bf192d6975-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:07 [async_llm.py:261] Added request cmpl-4e1b889fff034eba868885bf192d6975-0.
INFO 03-02 00:14:08 [logger.py:42] Received request cmpl-4c0464ce43f8478cb1f300f5c5cf3f51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:08 [async_llm.py:261] Added request cmpl-4c0464ce43f8478cb1f300f5c5cf3f51-0.
INFO 03-02 00:14:09 [logger.py:42] Received request cmpl-f85bb309334745a68be5f34e78d865f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:09 [async_llm.py:261] Added request cmpl-f85bb309334745a68be5f34e78d865f2-0.
INFO 03-02 00:14:10 [logger.py:42] Received request cmpl-bbeb6f5a3c884ab790bd2d3fc877bd29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:10 [async_llm.py:261] Added request cmpl-bbeb6f5a3c884ab790bd2d3fc877bd29-0.
INFO 03-02 00:14:11 [logger.py:42] Received request cmpl-4989a60c0f4c40dfba71b4435d25b01b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:11 [async_llm.py:261] Added request cmpl-4989a60c0f4c40dfba71b4435d25b01b-0.
INFO 03-02 00:14:12 [logger.py:42] Received request cmpl-f760e8e778704cd9b06fc0d5841af380-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:12 [async_llm.py:261] Added request cmpl-f760e8e778704cd9b06fc0d5841af380-0.
INFO 03-02 00:14:14 [logger.py:42] Received request cmpl-8cf8bbb114e44b049e80c13fbc9703ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:14 [async_llm.py:261] Added request cmpl-8cf8bbb114e44b049e80c13fbc9703ec-0.
INFO 03-02 00:14:15 [logger.py:42] Received request cmpl-2c508dad8d0246efb2a80bf26c0a6884-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:15 [async_llm.py:261] Added request cmpl-2c508dad8d0246efb2a80bf26c0a6884-0.
INFO 03-02 00:14:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:16 [logger.py:42] Received request cmpl-0fadc45777044760b60a18d761a00e82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:16 [async_llm.py:261] Added request cmpl-0fadc45777044760b60a18d761a00e82-0.
INFO 03-02 00:14:17 [logger.py:42] Received request cmpl-64ec7f3cab454f32b5fe754f0ae67c09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:17 [async_llm.py:261] Added request cmpl-64ec7f3cab454f32b5fe754f0ae67c09-0.
INFO 03-02 00:14:18 [logger.py:42] Received request cmpl-65c99ab6b9324ace8df3a34bf051343d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:18 [async_llm.py:261] Added request cmpl-65c99ab6b9324ace8df3a34bf051343d-0.
INFO 03-02 00:14:19 [logger.py:42] Received request cmpl-afd67b57023e451ca44f5079ef1e6ba1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:19 [async_llm.py:261] Added request cmpl-afd67b57023e451ca44f5079ef1e6ba1-0.
INFO 03-02 00:14:20 [logger.py:42] Received request cmpl-23da98a9bf2f47b09d208ca387a36489-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:20 [async_llm.py:261] Added request cmpl-23da98a9bf2f47b09d208ca387a36489-0.
INFO 03-02 00:14:21 [logger.py:42] Received request cmpl-94e2f49bdc3d43519aa47dc35d8301d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:21 [async_llm.py:261] Added request cmpl-94e2f49bdc3d43519aa47dc35d8301d3-0.
INFO 03-02 00:14:22 [logger.py:42] Received request cmpl-2e00716fb70140bcbff80c0e761557e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:22 [async_llm.py:261] Added request cmpl-2e00716fb70140bcbff80c0e761557e9-0.
INFO 03-02 00:14:23 [logger.py:42] Received request cmpl-330d424b86ce4161bde273ca6ed2a8ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:23 [async_llm.py:261] Added request cmpl-330d424b86ce4161bde273ca6ed2a8ee-0.
INFO 03-02 00:14:25 [logger.py:42] Received request cmpl-7530a335cbcf43fbb08ce32df6f5ff6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:25 [async_llm.py:261] Added request cmpl-7530a335cbcf43fbb08ce32df6f5ff6a-0.
INFO 03-02 00:14:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:26 [logger.py:42] Received request cmpl-a548228b12e94891bae46b068e4370da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:26 [async_llm.py:261] Added request cmpl-a548228b12e94891bae46b068e4370da-0.
INFO 03-02 00:14:27 [logger.py:42] Received request cmpl-6381619b8bd644618c2f2ba61dcd0622-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:27 [async_llm.py:261] Added request cmpl-6381619b8bd644618c2f2ba61dcd0622-0.
INFO 03-02 00:14:28 [logger.py:42] Received request cmpl-b5185cbb49884df7b4506903643972b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:28 [async_llm.py:261] Added request cmpl-b5185cbb49884df7b4506903643972b4-0.
INFO 03-02 00:14:29 [logger.py:42] Received request cmpl-be0ac84e28c04393b31795aa6fe389bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:29 [async_llm.py:261] Added request cmpl-be0ac84e28c04393b31795aa6fe389bd-0.
INFO 03-02 00:14:30 [logger.py:42] Received request cmpl-d7aa94f29eca4f78a6805d1470aeaad7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:30 [async_llm.py:261] Added request cmpl-d7aa94f29eca4f78a6805d1470aeaad7-0.
INFO 03-02 00:14:31 [logger.py:42] Received request cmpl-dc917d7185994630990d78eee8658b41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:31 [async_llm.py:261] Added request cmpl-dc917d7185994630990d78eee8658b41-0.
INFO 03-02 00:14:32 [logger.py:42] Received request cmpl-bb683132f13f436882ab2bbc90681cfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:32 [async_llm.py:261] Added request cmpl-bb683132f13f436882ab2bbc90681cfe-0.
INFO 03-02 00:14:33 [logger.py:42] Received request cmpl-ed0f416354834c9ca01f7b7e14a9f5de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:33 [async_llm.py:261] Added request cmpl-ed0f416354834c9ca01f7b7e14a9f5de-0.
INFO 03-02 00:14:34 [logger.py:42] Received request cmpl-1b9400d8ce2e4ff385c2111c50dc3be4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:34 [async_llm.py:261] Added request cmpl-1b9400d8ce2e4ff385c2111c50dc3be4-0.
INFO 03-02 00:14:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:35 [logger.py:42] Received request cmpl-c08b11a43f5849b2b353028f16f04508-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:35 [async_llm.py:261] Added request cmpl-c08b11a43f5849b2b353028f16f04508-0.
INFO 03-02 00:14:37 [logger.py:42] Received request cmpl-fd99d2611e9d4ee8bb661cf2c30d8d3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:37 [async_llm.py:261] Added request cmpl-fd99d2611e9d4ee8bb661cf2c30d8d3b-0.
INFO 03-02 00:14:38 [logger.py:42] Received request cmpl-15db7f5502a047f6a2eaaeff485a79ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:38 [async_llm.py:261] Added request cmpl-15db7f5502a047f6a2eaaeff485a79ad-0.
INFO 03-02 00:14:39 [logger.py:42] Received request cmpl-e6aab3ac79a746688e19d752635cfd91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:39 [async_llm.py:261] Added request cmpl-e6aab3ac79a746688e19d752635cfd91-0.
INFO 03-02 00:14:40 [logger.py:42] Received request cmpl-b765cf5ed7464e32a65944218cd52c34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:40 [async_llm.py:261] Added request cmpl-b765cf5ed7464e32a65944218cd52c34-0.
INFO 03-02 00:14:41 [logger.py:42] Received request cmpl-6e0f8531a57f4b389363bfd1a9642d8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:41 [async_llm.py:261] Added request cmpl-6e0f8531a57f4b389363bfd1a9642d8d-0.
INFO 03-02 00:14:42 [logger.py:42] Received request cmpl-938268f6dce245d5bf58fb978443e5fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:42 [async_llm.py:261] Added request cmpl-938268f6dce245d5bf58fb978443e5fa-0.
INFO 03-02 00:14:43 [logger.py:42] Received request cmpl-8c4259a09f7b4dd6b536206c8343e559-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:43 [async_llm.py:261] Added request cmpl-8c4259a09f7b4dd6b536206c8343e559-0.
INFO 03-02 00:14:44 [logger.py:42] Received request cmpl-aba4d6003d7d44a59ed2cb439f516e96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:44 [async_llm.py:261] Added request cmpl-aba4d6003d7d44a59ed2cb439f516e96-0.
INFO 03-02 00:14:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:45 [logger.py:42] Received request cmpl-027b31b25ce1489d9e838594fbf82644-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:45 [async_llm.py:261] Added request cmpl-027b31b25ce1489d9e838594fbf82644-0.
INFO 03-02 00:14:46 [logger.py:42] Received request cmpl-6b7394349e844509aeef1c916f86f062-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:46 [async_llm.py:261] Added request cmpl-6b7394349e844509aeef1c916f86f062-0.
INFO 03-02 00:14:48 [logger.py:42] Received request cmpl-1c3b42b9286f4af8ac588f1866653c80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:48 [async_llm.py:261] Added request cmpl-1c3b42b9286f4af8ac588f1866653c80-0.
INFO 03-02 00:14:49 [logger.py:42] Received request cmpl-c194cf3f52e34252962d6399f2efc813-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:49 [async_llm.py:261] Added request cmpl-c194cf3f52e34252962d6399f2efc813-0.
INFO 03-02 00:14:50 [logger.py:42] Received request cmpl-29b9c077e0424139b145632e0c67a434-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:50 [async_llm.py:261] Added request cmpl-29b9c077e0424139b145632e0c67a434-0.
INFO 03-02 00:14:51 [logger.py:42] Received request cmpl-fb53204a13614eb6ab8a196a52468467-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:51 [async_llm.py:261] Added request cmpl-fb53204a13614eb6ab8a196a52468467-0.
INFO 03-02 00:14:52 [logger.py:42] Received request cmpl-cd5fe8edb0ac4595ab9f00921908115a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:52 [async_llm.py:261] Added request cmpl-cd5fe8edb0ac4595ab9f00921908115a-0.
INFO 03-02 00:14:53 [logger.py:42] Received request cmpl-610748a4eae2450a9ebebb7306ab53d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:53 [async_llm.py:261] Added request cmpl-610748a4eae2450a9ebebb7306ab53d9-0.
INFO 03-02 00:14:54 [logger.py:42] Received request cmpl-b841a85d9a2045069f607d0a50355afb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:54 [async_llm.py:261] Added request cmpl-b841a85d9a2045069f607d0a50355afb-0.
INFO 03-02 00:14:55 [logger.py:42] Received request cmpl-567c2be7934949adb9ab7e2b8dffb66b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:55 [async_llm.py:261] Added request cmpl-567c2be7934949adb9ab7e2b8dffb66b-0.
INFO 03-02 00:14:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:14:56 [logger.py:42] Received request cmpl-9fd87a8c407641aaa16f81becf719b5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:56 [async_llm.py:261] Added request cmpl-9fd87a8c407641aaa16f81becf719b5b-0.
INFO 03-02 00:14:57 [logger.py:42] Received request cmpl-90486b080d354f5eb59e65db44f721d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:57 [async_llm.py:261] Added request cmpl-90486b080d354f5eb59e65db44f721d3-0.
INFO 03-02 00:14:59 [logger.py:42] Received request cmpl-4e5526f8bcb349af8418583eac7d0756-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:14:59 [async_llm.py:261] Added request cmpl-4e5526f8bcb349af8418583eac7d0756-0.
INFO 03-02 00:15:00 [logger.py:42] Received request cmpl-f6b4a8e8db8141c0ab2f72297f40bab9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:00 [async_llm.py:261] Added request cmpl-f6b4a8e8db8141c0ab2f72297f40bab9-0.
INFO 03-02 00:15:01 [logger.py:42] Received request cmpl-e84b8162cd864adfbada2e119c599a82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:01 [async_llm.py:261] Added request cmpl-e84b8162cd864adfbada2e119c599a82-0.
INFO 03-02 00:15:02 [logger.py:42] Received request cmpl-21b186a6a5fc4865be78ae4e7e0ee08d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:02 [async_llm.py:261] Added request cmpl-21b186a6a5fc4865be78ae4e7e0ee08d-0.
INFO 03-02 00:15:03 [logger.py:42] Received request cmpl-44156973129a425fa7ffb59fe137ebe2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:03 [async_llm.py:261] Added request cmpl-44156973129a425fa7ffb59fe137ebe2-0.
INFO 03-02 00:15:04 [logger.py:42] Received request cmpl-99ea62657e8643128de1cda9b0465d7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:04 [async_llm.py:261] Added request cmpl-99ea62657e8643128de1cda9b0465d7d-0.
INFO 03-02 00:15:05 [logger.py:42] Received request cmpl-f1e0a73d507c4def88ba87c2f49b9c8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:05 [async_llm.py:261] Added request cmpl-f1e0a73d507c4def88ba87c2f49b9c8c-0.
INFO 03-02 00:15:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:06 [logger.py:42] Received request cmpl-fdbc62862e084eb4823b098d95d620e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:06 [async_llm.py:261] Added request cmpl-fdbc62862e084eb4823b098d95d620e9-0.
INFO 03-02 00:15:07 [logger.py:42] Received request cmpl-6e35ac4017494bd3b294bcc2aecf8bae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:07 [async_llm.py:261] Added request cmpl-6e35ac4017494bd3b294bcc2aecf8bae-0.
INFO 03-02 00:15:08 [logger.py:42] Received request cmpl-e2ae1b73569c4759b970cda47fdd36a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:08 [async_llm.py:261] Added request cmpl-e2ae1b73569c4759b970cda47fdd36a4-0.
INFO 03-02 00:15:09 [logger.py:42] Received request cmpl-4383da32d1ad49d4920e1cc302cc2426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:09 [async_llm.py:261] Added request cmpl-4383da32d1ad49d4920e1cc302cc2426-0.
INFO 03-02 00:15:11 [logger.py:42] Received request cmpl-64e85879a4f640a3b83c4f59667d0e65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:11 [async_llm.py:261] Added request cmpl-64e85879a4f640a3b83c4f59667d0e65-0.
INFO 03-02 00:15:12 [logger.py:42] Received request cmpl-4b8c50e58f7744028ce39acd14b0d855-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:12 [async_llm.py:261] Added request cmpl-4b8c50e58f7744028ce39acd14b0d855-0.
INFO 03-02 00:15:13 [logger.py:42] Received request cmpl-fea0a199f27f47dab0171fcb62d6d0a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:13 [async_llm.py:261] Added request cmpl-fea0a199f27f47dab0171fcb62d6d0a4-0.
INFO 03-02 00:15:14 [logger.py:42] Received request cmpl-58bac86037744f9eba6bec47cdcd25a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:14 [async_llm.py:261] Added request cmpl-58bac86037744f9eba6bec47cdcd25a4-0.
INFO 03-02 00:15:15 [logger.py:42] Received request cmpl-a32c1fea786b497ebe70d10af2db9d85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:15 [async_llm.py:261] Added request cmpl-a32c1fea786b497ebe70d10af2db9d85-0.
INFO 03-02 00:15:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:16 [logger.py:42] Received request cmpl-926953754ed8460e8873f96e2d7e429d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:16 [async_llm.py:261] Added request cmpl-926953754ed8460e8873f96e2d7e429d-0.
INFO 03-02 00:15:17 [logger.py:42] Received request cmpl-9ae772afa2144569bcf73f7e538b5d58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:17 [async_llm.py:261] Added request cmpl-9ae772afa2144569bcf73f7e538b5d58-0.
INFO 03-02 00:15:18 [logger.py:42] Received request cmpl-f6f1134ff07740e5b90f7dd95e28e1e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:18 [async_llm.py:261] Added request cmpl-f6f1134ff07740e5b90f7dd95e28e1e1-0.
INFO 03-02 00:15:19 [logger.py:42] Received request cmpl-a09275a116e94570adc1da8f4210c858-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:19 [async_llm.py:261] Added request cmpl-a09275a116e94570adc1da8f4210c858-0.
INFO 03-02 00:15:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:26 [logger.py:42] Received request cmpl-f4659cd5933340b5bbe53616d404efa3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:26 [async_llm.py:261] Added request cmpl-f4659cd5933340b5bbe53616d404efa3-0.
INFO 03-02 00:15:27 [logger.py:42] Received request cmpl-f0d74c77c0ed4966be73bd27dc0b1aa4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:27 [async_llm.py:261] Added request cmpl-f0d74c77c0ed4966be73bd27dc0b1aa4-0.
INFO 03-02 00:15:28 [logger.py:42] Received request cmpl-c43b4af61f9444858b9a668618f873e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:28 [async_llm.py:261] Added request cmpl-c43b4af61f9444858b9a668618f873e0-0.
INFO 03-02 00:15:29 [logger.py:42] Received request cmpl-3c34c6a149c740d9ab40762797b60bca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:29 [async_llm.py:261] Added request cmpl-3c34c6a149c740d9ab40762797b60bca-0.
INFO 03-02 00:15:30 [logger.py:42] Received request cmpl-03720cd7b8834b02aa322682bb359192-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:30 [async_llm.py:261] Added request cmpl-03720cd7b8834b02aa322682bb359192-0.
INFO 03-02 00:15:31 [logger.py:42] Received request cmpl-74f1198db29e4e76b7d27be5b8003313-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:31 [async_llm.py:261] Added request cmpl-74f1198db29e4e76b7d27be5b8003313-0.
INFO 03-02 00:15:32 [logger.py:42] Received request cmpl-fec80acb5e644236ad9f29d456790764-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:32 [async_llm.py:261] Added request cmpl-fec80acb5e644236ad9f29d456790764-0.
INFO 03-02 00:15:34 [logger.py:42] Received request cmpl-603a38a7eed14a059e4f0a2cd8c3c4e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:34 [async_llm.py:261] Added request cmpl-603a38a7eed14a059e4f0a2cd8c3c4e7-0.
INFO 03-02 00:15:35 [logger.py:42] Received request cmpl-f01db58b79ed4500860fb11745805eb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:35 [async_llm.py:261] Added request cmpl-f01db58b79ed4500860fb11745805eb2-0.
INFO 03-02 00:15:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:36 [logger.py:42] Received request cmpl-fcce5df172d24d5399cb9178c923911a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:36 [async_llm.py:261] Added request cmpl-fcce5df172d24d5399cb9178c923911a-0.
INFO 03-02 00:15:37 [logger.py:42] Received request cmpl-56b77e7051a24068941eb5f51ae5d178-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:37 [async_llm.py:261] Added request cmpl-56b77e7051a24068941eb5f51ae5d178-0.
INFO 03-02 00:15:38 [logger.py:42] Received request cmpl-8505ebe0030b4e4aae7b7b1dcdadedf6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:38 [async_llm.py:261] Added request cmpl-8505ebe0030b4e4aae7b7b1dcdadedf6-0.
INFO 03-02 00:15:39 [logger.py:42] Received request cmpl-05fa7f3de37b4b309251b8515abb4b58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:39 [async_llm.py:261] Added request cmpl-05fa7f3de37b4b309251b8515abb4b58-0.
INFO 03-02 00:15:40 [logger.py:42] Received request cmpl-eb66e6818b0d49b0b9ee1eeb398b9c9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:40 [async_llm.py:261] Added request cmpl-eb66e6818b0d49b0b9ee1eeb398b9c9d-0.
INFO 03-02 00:15:41 [logger.py:42] Received request cmpl-6983c5c380804fdbbbab91354689c993-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:41 [async_llm.py:261] Added request cmpl-6983c5c380804fdbbbab91354689c993-0.
INFO 03-02 00:15:42 [logger.py:42] Received request cmpl-955e116d1e374898b1c7ab55d542fa03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:42 [async_llm.py:261] Added request cmpl-955e116d1e374898b1c7ab55d542fa03-0.
INFO 03-02 00:15:43 [logger.py:42] Received request cmpl-7cbfbe7ec8344e4694cfc4538b74579c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:43 [async_llm.py:261] Added request cmpl-7cbfbe7ec8344e4694cfc4538b74579c-0.
INFO 03-02 00:15:45 [logger.py:42] Received request cmpl-38de9922392b42209f5fa25807a87610-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:45 [async_llm.py:261] Added request cmpl-38de9922392b42209f5fa25807a87610-0.
INFO 03-02 00:15:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:46 [logger.py:42] Received request cmpl-0cd18bcd0439448e9d9f5eeb82ad6fdf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:46 [async_llm.py:261] Added request cmpl-0cd18bcd0439448e9d9f5eeb82ad6fdf-0.
INFO 03-02 00:15:47 [logger.py:42] Received request cmpl-90a3ef4d7e8d45b9beac8c2ffeee86ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:47 [async_llm.py:261] Added request cmpl-90a3ef4d7e8d45b9beac8c2ffeee86ca-0.
INFO 03-02 00:15:48 [logger.py:42] Received request cmpl-6c4009253ca845b88c6ba5d3f4dff322-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:48 [async_llm.py:261] Added request cmpl-6c4009253ca845b88c6ba5d3f4dff322-0.
INFO 03-02 00:15:49 [logger.py:42] Received request cmpl-1b05cdbe800d4293a49546622ba89f73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:49 [async_llm.py:261] Added request cmpl-1b05cdbe800d4293a49546622ba89f73-0.
INFO 03-02 00:15:50 [logger.py:42] Received request cmpl-b7cfcd8e0f674de6a9f93a2eb4f08269-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:50 [async_llm.py:261] Added request cmpl-b7cfcd8e0f674de6a9f93a2eb4f08269-0.
INFO 03-02 00:15:51 [logger.py:42] Received request cmpl-af0e57d6a6e14cd3896675883c5c1fcd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:51 [async_llm.py:261] Added request cmpl-af0e57d6a6e14cd3896675883c5c1fcd-0.
INFO 03-02 00:15:52 [logger.py:42] Received request cmpl-629ca0ef057246dd90f0454eca300a6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:52 [async_llm.py:261] Added request cmpl-629ca0ef057246dd90f0454eca300a6f-0.
INFO 03-02 00:15:53 [logger.py:42] Received request cmpl-82d76252ca804ebd8a073e7a1fb99f3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:53 [async_llm.py:261] Added request cmpl-82d76252ca804ebd8a073e7a1fb99f3b-0.
INFO 03-02 00:15:54 [logger.py:42] Received request cmpl-2b77e15f15214aca839f29fcd258315a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:54 [async_llm.py:261] Added request cmpl-2b77e15f15214aca839f29fcd258315a-0.
INFO 03-02 00:15:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:15:55 [logger.py:42] Received request cmpl-353084214c1f475da8a4c2772508f7bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:55 [async_llm.py:261] Added request cmpl-353084214c1f475da8a4c2772508f7bb-0.
INFO 03-02 00:15:57 [logger.py:42] Received request cmpl-ca19d24bff754484839e15ff548f290a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:57 [async_llm.py:261] Added request cmpl-ca19d24bff754484839e15ff548f290a-0.
INFO 03-02 00:15:58 [logger.py:42] Received request cmpl-7fc6217f63f44c99953060ddfea4b738-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:58 [async_llm.py:261] Added request cmpl-7fc6217f63f44c99953060ddfea4b738-0.
INFO 03-02 00:15:59 [logger.py:42] Received request cmpl-8e0f30d22e14469bad8dfde50f2a1a5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:15:59 [async_llm.py:261] Added request cmpl-8e0f30d22e14469bad8dfde50f2a1a5a-0.
[... repeated identical request cycles (00:16:00–00:16:04) elided; same prompt, SamplingParams, and token IDs — only timestamps and request IDs differ ...]
INFO 03-02 00:16:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
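The periodic `loggers.py` lines are the engine's rolling stats. A sketch of how such a line could be parsed into numeric fields for monitoring; the line format is copied verbatim from this log, while the regex and field names are choices made here for illustration:

```python
import re

# Named-group regex over the vLLM periodic stats line shown above.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

line = (
    "INFO 03-02 00:16:05 [loggers.py:116] Engine 000: "
    "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
    "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
    "Prefix cache hit rate: 0.0%"
)

m = STATS_RE.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)
```

With one short request per second, low averaged throughput and an idle queue (Running/Waiting at or near 0 between bursts) are expected.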
[... repeated identical request cycles (00:16:05–00:16:15) elided; only timestamps and request IDs differ ...]
INFO 03-02 00:16:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... repeated identical request cycles (00:16:16–00:16:25) elided; only timestamps and request IDs differ ...]
INFO 03-02 00:16:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... repeated identical request cycles (00:16:26–00:16:35) elided; only timestamps and request IDs differ ...]
INFO 03-02 00:16:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... repeated identical request cycles (00:16:36–00:16:44) elided; the captured log ends mid-cycle ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:44 [async_llm.py:261] Added request cmpl-b4317a2300604777b1be5af755d11353-0.
INFO 03-02 00:16:45 [logger.py:42] Received request cmpl-089e0e6310f34b65852926faad48047b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:45 [async_llm.py:261] Added request cmpl-089e0e6310f34b65852926faad48047b-0.
INFO 03-02 00:16:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
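The throughput figures in the stats line above are consistent with the request pattern in this log: every request carries the same 7-token prompt (`prompt_token_ids` lists 7 ids) and caps generation at `max_tokens=5`, with requests arriving roughly once per second. A quick sanity check of the arithmetic (the ~0.9 req/s arrival rate is backed out from the logged numbers, not stated in the log):

```python
# Each logged request: 7 prompt tokens, at most 5 generated tokens.
prompt_tokens_per_req = 7   # len(prompt_token_ids) in every log entry
gen_tokens_per_req = 5      # max_tokens=5; an upper bound on actual generation

# Back out the request rate implied by the logged 6.3 tok/s prompt throughput.
reqs_per_sec = 6.3 / prompt_tokens_per_req
print(round(reqs_per_sec, 2))                        # 0.9

# That rate times 5 generated tokens per request reproduces the logged
# 4.5 tok/s generation throughput.
print(round(reqs_per_sec * gen_tokens_per_req, 2))   # 4.5
```

The match between the two derived numbers and the stats line suggests every request is in fact hitting the `max_tokens=5` cap.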
INFO 03-02 00:16:46 [logger.py:42] Received request cmpl-41c2a3eb6b844eed88d1cd4b1d40ac38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:46 [async_llm.py:261] Added request cmpl-41c2a3eb6b844eed88d1cd4b1d40ac38-0.
INFO 03-02 00:16:47 [logger.py:42] Received request cmpl-2082718beb11473794041b4979616748-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:47 [async_llm.py:261] Added request cmpl-2082718beb11473794041b4979616748-0.
INFO 03-02 00:16:48 [logger.py:42] Received request cmpl-a865283684064f28a89ee07a3760e4fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:48 [async_llm.py:261] Added request cmpl-a865283684064f28a89ee07a3760e4fa-0.
INFO 03-02 00:16:49 [logger.py:42] Received request cmpl-a02c1d4b8e154fea80871d03af273ade-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:49 [async_llm.py:261] Added request cmpl-a02c1d4b8e154fea80871d03af273ade-0.
INFO 03-02 00:16:50 [logger.py:42] Received request cmpl-bfc2a9dbb9874e65badde113191cef82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:50 [async_llm.py:261] Added request cmpl-bfc2a9dbb9874e65badde113191cef82-0.
INFO 03-02 00:16:51 [logger.py:42] Received request cmpl-31da524f9cef48469353585888941db1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:51 [async_llm.py:261] Added request cmpl-31da524f9cef48469353585888941db1-0.
INFO 03-02 00:16:52 [logger.py:42] Received request cmpl-63775e2e447845df927fef92af5d7e7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:52 [async_llm.py:261] Added request cmpl-63775e2e447845df927fef92af5d7e7f-0.
INFO 03-02 00:16:53 [logger.py:42] Received request cmpl-f2fcf36c9fec4d50a98fcc9d7354e943-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:53 [async_llm.py:261] Added request cmpl-f2fcf36c9fec4d50a98fcc9d7354e943-0.
INFO 03-02 00:16:55 [logger.py:42] Received request cmpl-ad3ddc984fa04663b5d9447521ebb032-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:55 [async_llm.py:261] Added request cmpl-ad3ddc984fa04663b5d9447521ebb032-0.
INFO 03-02 00:16:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:16:56 [logger.py:42] Received request cmpl-a765ceccce604c8d9c95319dd8abfa66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:56 [async_llm.py:261] Added request cmpl-a765ceccce604c8d9c95319dd8abfa66-0.
INFO 03-02 00:16:57 [logger.py:42] Received request cmpl-ed34ba90da424f99b4314a186005a147-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:57 [async_llm.py:261] Added request cmpl-ed34ba90da424f99b4314a186005a147-0.
INFO 03-02 00:16:58 [logger.py:42] Received request cmpl-2fe1066e536b42d0b9aa62c4e0229492-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:58 [async_llm.py:261] Added request cmpl-2fe1066e536b42d0b9aa62c4e0229492-0.
INFO 03-02 00:16:59 [logger.py:42] Received request cmpl-82c0485109cb4f708ad381da795eb16c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:16:59 [async_llm.py:261] Added request cmpl-82c0485109cb4f708ad381da795eb16c-0.
INFO 03-02 00:17:00 [logger.py:42] Received request cmpl-32096dde9d624d3fb08c5cde3670a628-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:00 [async_llm.py:261] Added request cmpl-32096dde9d624d3fb08c5cde3670a628-0.
INFO 03-02 00:17:01 [logger.py:42] Received request cmpl-c045640abf644701824b3478dd4f560a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:01 [async_llm.py:261] Added request cmpl-c045640abf644701824b3478dd4f560a-0.
INFO 03-02 00:17:02 [logger.py:42] Received request cmpl-026f58ad8c1f440c94e08bbbc93ade1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:02 [async_llm.py:261] Added request cmpl-026f58ad8c1f440c94e08bbbc93ade1e-0.
INFO 03-02 00:17:03 [logger.py:42] Received request cmpl-c2d16105ab0b4559ac432539bf41711c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:03 [async_llm.py:261] Added request cmpl-c2d16105ab0b4559ac432539bf41711c-0.
INFO 03-02 00:17:04 [logger.py:42] Received request cmpl-9f1e44f809574929b39b5e9b5f7801f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:04 [async_llm.py:261] Added request cmpl-9f1e44f809574929b39b5e9b5f7801f9-0.
INFO 03-02 00:17:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:06 [logger.py:42] Received request cmpl-e35baedc722045b8a80822a1bd92a0d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:06 [async_llm.py:261] Added request cmpl-e35baedc722045b8a80822a1bd92a0d9-0.
INFO 03-02 00:17:07 [logger.py:42] Received request cmpl-532e24205f6141adab2caec63301305e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:07 [async_llm.py:261] Added request cmpl-532e24205f6141adab2caec63301305e-0.
INFO 03-02 00:17:08 [logger.py:42] Received request cmpl-4ba7e4f8176940f28a12a3092a37b700-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:08 [async_llm.py:261] Added request cmpl-4ba7e4f8176940f28a12a3092a37b700-0.
INFO 03-02 00:17:09 [logger.py:42] Received request cmpl-8833bd45f9f24d769cb8ef5651089371-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:09 [async_llm.py:261] Added request cmpl-8833bd45f9f24d769cb8ef5651089371-0.
INFO 03-02 00:17:10 [logger.py:42] Received request cmpl-50eda292d4b8406381bda035855bc4fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:10 [async_llm.py:261] Added request cmpl-50eda292d4b8406381bda035855bc4fe-0.
INFO 03-02 00:17:11 [logger.py:42] Received request cmpl-38851ceb7ebe4798bc8fdc08eef13ccb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:11 [async_llm.py:261] Added request cmpl-38851ceb7ebe4798bc8fdc08eef13ccb-0.
INFO 03-02 00:17:12 [logger.py:42] Received request cmpl-7bf82c60d9164ed4a890cab728cf5688-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:12 [async_llm.py:261] Added request cmpl-7bf82c60d9164ed4a890cab728cf5688-0.
INFO 03-02 00:17:13 [logger.py:42] Received request cmpl-d8db2d1b2e8d41a6ad9f8403eeae1eec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:13 [async_llm.py:261] Added request cmpl-d8db2d1b2e8d41a6ad9f8403eeae1eec-0.
INFO 03-02 00:17:14 [logger.py:42] Received request cmpl-6595922e13f048d4af1137f6004e8925-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:14 [async_llm.py:261] Added request cmpl-6595922e13f048d4af1137f6004e8925-0.
INFO 03-02 00:17:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:17:15 [logger.py:42] Received request cmpl-60a4cd570a8d407ebf05f6fa72ba80d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:15 [async_llm.py:261] Added request cmpl-60a4cd570a8d407ebf05f6fa72ba80d5-0.
INFO 03-02 00:17:16 [logger.py:42] Received request cmpl-a7e29686723147cabfd0477778476844-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:17 [async_llm.py:261] Added request cmpl-a7e29686723147cabfd0477778476844-0.
INFO 03-02 00:17:18 [logger.py:42] Received request cmpl-bf31c934e8bb4d8fbaabd0e2f52707df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:17:18 [async_llm.py:261] Added request cmpl-bf31c934e8bb4d8fbaabd0e2f52707df-0.
[... 6 identical request cycles (00:17:19–00:17:24) omitted; same prompt and SamplingParams as above ...]
INFO 03-02 00:17:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
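The periodic engine summaries are consistent with the request pattern in this log: roughly one request per second (9 or 10 per 10-second stats window), each carrying 7 prompt tokens (`len(prompt_token_ids)`) and generating the full `max_tokens=5` completion. A quick sanity check of the reported averages:

```python
# Reproduce the logged throughput averages from the request pattern.
# Each request has 7 prompt tokens and generates max_tokens=5 tokens;
# the engine reports averages over a 10-second window.
prompt_tokens_per_req = 7
gen_tokens_per_req = 5
window_s = 10

for reqs in (9, 10):  # ~1 request/second => 9 or 10 per window
    prompt_tps = reqs * prompt_tokens_per_req / window_s
    gen_tps = reqs * gen_tokens_per_req / window_s
    print(reqs, prompt_tps, gen_tps)
# 9 6.3 4.5
# 10 7.0 5.0
```

These match the alternating 6.3/4.5 and 7.0/5.0 tokens/s figures in the summaries; `Running: 0 reqs` in each summary indicates every 5-token completion finishes between arrivals.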
[... 10 identical request cycles (00:17:25–00:17:35) omitted; same prompt and SamplingParams as above ...]
INFO 03-02 00:17:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles (00:17:36–00:17:45) omitted; same prompt and SamplingParams as above ...]
INFO 03-02 00:17:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles (00:17:46–00:17:55) omitted; same prompt and SamplingParams as above ...]
INFO 03-02 00:17:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 6 identical request cycles (00:17:56–00:18:01) omitted; same prompt and SamplingParams as above ...]
INFO 03-02 00:18:02 [logger.py:42] Received request cmpl-a3640d8b3cb0486491dc0736b957d32c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:02 [async_llm.py:261] Added request cmpl-a3640d8b3cb0486491dc0736b957d32c-0.
INFO 03-02 00:18:04 [logger.py:42] Received request cmpl-c5218b89efd84932b1eddeb117bf51a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:04 [async_llm.py:261] Added request cmpl-c5218b89efd84932b1eddeb117bf51a1-0.
INFO 03-02 00:18:05 [logger.py:42] Received request cmpl-2707e6b87d5d416fa0f2a803b2e6abee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:05 [async_llm.py:261] Added request cmpl-2707e6b87d5d416fa0f2a803b2e6abee-0.
INFO 03-02 00:18:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
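The periodic `loggers.py:116` line above is vLLM's rolling engine-stats report. A small parser like the following (a sketch; the regex is written against the exact field layout of the line above, and the function name is illustrative, not part of vLLM) can turn it into numbers for a dashboard:

```python
import re

# Regex over vLLM's periodic engine-stats log line, matching the
# six numeric fields exactly as they appear in the log above.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_engine_stats(line: str):
    """Return the numeric fields of a stats line as a dict, or None."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

line = ("INFO 03-02 00:18:05 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")
stats = parse_engine_stats(line)
```

Note that `Running: 0` / `Waiting: 0` alongside nonzero throughput is expected here: each 5-token request completes well within the 10-second reporting window, so the averages are nonzero while the instantaneous queue is empty.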
INFO 03-02 00:18:06 [logger.py:42] Received request cmpl-a98ddc28043b45f0b1bd2e379d2fde34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:06 [async_llm.py:261] Added request cmpl-a98ddc28043b45f0b1bd2e379d2fde34-0.
INFO 03-02 00:18:07 [logger.py:42] Received request cmpl-bcd4505dbb734b8eb80775f807ebf0a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:07 [async_llm.py:261] Added request cmpl-bcd4505dbb734b8eb80775f807ebf0a4-0.
INFO 03-02 00:18:08 [logger.py:42] Received request cmpl-21fb8a19bc5a4270aa56ba1e80a25131-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:08 [async_llm.py:261] Added request cmpl-21fb8a19bc5a4270aa56ba1e80a25131-0.
INFO 03-02 00:18:09 [logger.py:42] Received request cmpl-6cc40877b261479b818be0cd4876408c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:09 [async_llm.py:261] Added request cmpl-6cc40877b261479b818be0cd4876408c-0.
INFO 03-02 00:18:10 [logger.py:42] Received request cmpl-4a3b2ba60f65488a842d26d6e2b44b2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:10 [async_llm.py:261] Added request cmpl-4a3b2ba60f65488a842d26d6e2b44b2c-0.
INFO 03-02 00:18:11 [logger.py:42] Received request cmpl-76173ceb28314ed8b8885953165b3268-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:11 [async_llm.py:261] Added request cmpl-76173ceb28314ed8b8885953165b3268-0.
INFO 03-02 00:18:12 [logger.py:42] Received request cmpl-f5d258e381234685a4eb7932ca2f212d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:12 [async_llm.py:261] Added request cmpl-f5d258e381234685a4eb7932ca2f212d-0.
INFO 03-02 00:18:13 [logger.py:42] Received request cmpl-ab4cb33df5734bac91c72852fdb2bcc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:13 [async_llm.py:261] Added request cmpl-ab4cb33df5734bac91c72852fdb2bcc3-0.
INFO 03-02 00:18:15 [logger.py:42] Received request cmpl-572364b8102a4506abf01f8b7b69f5b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:15 [async_llm.py:261] Added request cmpl-572364b8102a4506abf01f8b7b69f5b3-0.
INFO 03-02 00:18:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:18:16 [logger.py:42] Received request cmpl-22d9ec3edc4b49a4bee8e3686a07bd0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:16 [async_llm.py:261] Added request cmpl-22d9ec3edc4b49a4bee8e3686a07bd0e-0.
INFO 03-02 00:18:17 [logger.py:42] Received request cmpl-fee22d611d36481ebe4f9a0a0bce9a5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:17 [async_llm.py:261] Added request cmpl-fee22d611d36481ebe4f9a0a0bce9a5e-0.
INFO 03-02 00:18:18 [logger.py:42] Received request cmpl-aa25a7722e5f43838ad0429f6a43d128-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:18 [async_llm.py:261] Added request cmpl-aa25a7722e5f43838ad0429f6a43d128-0.
INFO 03-02 00:18:19 [logger.py:42] Received request cmpl-0df6bb69c1d14efd9b75499f1f2b36e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:19 [async_llm.py:261] Added request cmpl-0df6bb69c1d14efd9b75499f1f2b36e7-0.
INFO 03-02 00:18:20 [logger.py:42] Received request cmpl-a8ac58ffe45f4f47b8c8790d4a89565a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:20 [async_llm.py:261] Added request cmpl-a8ac58ffe45f4f47b8c8790d4a89565a-0.
INFO 03-02 00:18:21 [logger.py:42] Received request cmpl-282c3938713c4c32b84bb837648c41f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:21 [async_llm.py:261] Added request cmpl-282c3938713c4c32b84bb837648c41f9-0.
INFO 03-02 00:18:22 [logger.py:42] Received request cmpl-ad53e5a7b55547a79b470710b215a232-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:22 [async_llm.py:261] Added request cmpl-ad53e5a7b55547a79b470710b215a232-0.
INFO 03-02 00:18:23 [logger.py:42] Received request cmpl-70546a809818405b873f61ba5ad9b978-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:23 [async_llm.py:261] Added request cmpl-70546a809818405b873f61ba5ad9b978-0.
INFO 03-02 00:18:24 [logger.py:42] Received request cmpl-4b01f012d2294afda5c44242158a0cfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:24 [async_llm.py:261] Added request cmpl-4b01f012d2294afda5c44242158a0cfc-0.
INFO 03-02 00:18:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:18:26 [logger.py:42] Received request cmpl-85cf7770a9e44a00b67c74572b5b8928-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:26 [async_llm.py:261] Added request cmpl-85cf7770a9e44a00b67c74572b5b8928-0.
INFO 03-02 00:18:27 [logger.py:42] Received request cmpl-2318a73a780c4fd586039f4f1f58e596-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:27 [async_llm.py:261] Added request cmpl-2318a73a780c4fd586039f4f1f58e596-0.
INFO 03-02 00:18:28 [logger.py:42] Received request cmpl-e27c837430594f3eb5975feaf2f7ebbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:28 [async_llm.py:261] Added request cmpl-e27c837430594f3eb5975feaf2f7ebbd-0.
INFO 03-02 00:18:29 [logger.py:42] Received request cmpl-95e4b05734434acca3421707e06bb5e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:29 [async_llm.py:261] Added request cmpl-95e4b05734434acca3421707e06bb5e6-0.
INFO 03-02 00:18:30 [logger.py:42] Received request cmpl-270ef6079d974c3e9866f622130089e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:30 [async_llm.py:261] Added request cmpl-270ef6079d974c3e9866f622130089e3-0.
INFO 03-02 00:18:31 [logger.py:42] Received request cmpl-16d07d78114642c196b169ace3fb577e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:31 [async_llm.py:261] Added request cmpl-16d07d78114642c196b169ace3fb577e-0.
INFO 03-02 00:18:32 [logger.py:42] Received request cmpl-d371e0e7b53943fc94aeb38d186b33a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:32 [async_llm.py:261] Added request cmpl-d371e0e7b53943fc94aeb38d186b33a2-0.
INFO 03-02 00:18:33 [logger.py:42] Received request cmpl-cb6692bb83c945d290eba05cb1c430d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:33 [async_llm.py:261] Added request cmpl-cb6692bb83c945d290eba05cb1c430d8-0.
INFO 03-02 00:18:34 [logger.py:42] Received request cmpl-6d8503e8ae80482fb791635e74e58d1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:34 [async_llm.py:261] Added request cmpl-6d8503e8ae80482fb791635e74e58d1c-0.
INFO 03-02 00:18:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:18:35 [logger.py:42] Received request cmpl-433eb788102b4c8186a7d7b54f71ef26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:35 [async_llm.py:261] Added request cmpl-433eb788102b4c8186a7d7b54f71ef26-0.
INFO 03-02 00:18:37 [logger.py:42] Received request cmpl-0c3c35010ff849a9b717904930eef3b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:18:37 [async_llm.py:261] Added request cmpl-0c3c35010ff849a9b717904930eef3b6-0.
[... 7 identical request cycles elided (Received request / 200 OK / Added request, one per second, new cmpl-* request ID each), 00:18:38 to 00:18:44 ...]
INFO 03-02 00:18:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 identical request cycles elided (Received request / 200 OK / Added request, roughly one per second, new cmpl-* request ID each), 00:18:45 to 00:18:55 ...]
INFO 03-02 00:18:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided (Received request / 200 OK / Added request, roughly one per second, new cmpl-* request ID each), 00:18:56 to 00:19:05 ...]
INFO 03-02 00:19:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided (Received request / 200 OK / Added request, roughly one per second, new cmpl-* request ID each), 00:19:06 to 00:19:15 ...]
INFO 03-02 00:19:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 5 identical request cycles elided (Received request / 200 OK / Added request, one per second, new cmpl-* request ID each), 00:19:16 to 00:19:20 ...]
INFO 03-02 00:19:21 [logger.py:42] Received request cmpl-fe31499511bc40c7a4d9946762292aef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:21 [async_llm.py:261] Added request cmpl-fe31499511bc40c7a4d9946762292aef-0.
INFO 03-02 00:19:23 [logger.py:42] Received request cmpl-f41746e0ef7a4f3099bf2626a1a24eea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:23 [async_llm.py:261] Added request cmpl-f41746e0ef7a4f3099bf2626a1a24eea-0.
INFO 03-02 00:19:24 [logger.py:42] Received request cmpl-47f34586bb3f4ec184da6aea9c348ec8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:24 [async_llm.py:261] Added request cmpl-47f34586bb3f4ec184da6aea9c348ec8-0.
INFO 03-02 00:19:25 [logger.py:42] Received request cmpl-f3e684b701dc412785e58947b05d3a62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:25 [async_llm.py:261] Added request cmpl-f3e684b701dc412785e58947b05d3a62-0.
INFO 03-02 00:19:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:26 [logger.py:42] Received request cmpl-a0eb0cded5334632828c5256ea3ee9ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:26 [async_llm.py:261] Added request cmpl-a0eb0cded5334632828c5256ea3ee9ee-0.
INFO 03-02 00:19:27 [logger.py:42] Received request cmpl-87891e20d27a412a889c6bb70603914f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:27 [async_llm.py:261] Added request cmpl-87891e20d27a412a889c6bb70603914f-0.
INFO 03-02 00:19:28 [logger.py:42] Received request cmpl-3afe7c557c3442db8e8591c5e4a032d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:28 [async_llm.py:261] Added request cmpl-3afe7c557c3442db8e8591c5e4a032d2-0.
INFO 03-02 00:19:29 [logger.py:42] Received request cmpl-d347905481a44767aaaf4d048e0ed3b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:29 [async_llm.py:261] Added request cmpl-d347905481a44767aaaf4d048e0ed3b0-0.
INFO 03-02 00:19:30 [logger.py:42] Received request cmpl-e11d3a85e65e4e9bbeb16ec1a68b8b5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:30 [async_llm.py:261] Added request cmpl-e11d3a85e65e4e9bbeb16ec1a68b8b5c-0.
INFO 03-02 00:19:31 [logger.py:42] Received request cmpl-a2ee9ea238884fedb9679a83b418664f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:31 [async_llm.py:261] Added request cmpl-a2ee9ea238884fedb9679a83b418664f-0.
INFO 03-02 00:19:32 [logger.py:42] Received request cmpl-cf898f32864847abbf71542110d57322-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:32 [async_llm.py:261] Added request cmpl-cf898f32864847abbf71542110d57322-0.
INFO 03-02 00:19:33 [logger.py:42] Received request cmpl-9b431a8c08f0454790373afe881c3e4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:33 [async_llm.py:261] Added request cmpl-9b431a8c08f0454790373afe881c3e4e-0.
INFO 03-02 00:19:35 [logger.py:42] Received request cmpl-4b6599652a6649f4b10e377e1fcb437a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:35 [async_llm.py:261] Added request cmpl-4b6599652a6649f4b10e377e1fcb437a-0.
INFO 03-02 00:19:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:36 [logger.py:42] Received request cmpl-a65372d7575d49ed906e8c303aec65f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:36 [async_llm.py:261] Added request cmpl-a65372d7575d49ed906e8c303aec65f7-0.
INFO 03-02 00:19:37 [logger.py:42] Received request cmpl-00f2d1e0c69a46d5afe8c3325c191646-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:37 [async_llm.py:261] Added request cmpl-00f2d1e0c69a46d5afe8c3325c191646-0.
INFO 03-02 00:19:38 [logger.py:42] Received request cmpl-078fab9c0f2b41f28c81bbff2525e773-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:38 [async_llm.py:261] Added request cmpl-078fab9c0f2b41f28c81bbff2525e773-0.
INFO 03-02 00:19:39 [logger.py:42] Received request cmpl-39fea9fcb9fb4677b98ae2a5a76ede98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:39 [async_llm.py:261] Added request cmpl-39fea9fcb9fb4677b98ae2a5a76ede98-0.
INFO 03-02 00:19:40 [logger.py:42] Received request cmpl-a17cf9ea72ee48718b4a4ed31c10a8eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:40 [async_llm.py:261] Added request cmpl-a17cf9ea72ee48718b4a4ed31c10a8eb-0.
INFO 03-02 00:19:41 [logger.py:42] Received request cmpl-01f8e97168fa43f6a1d5f3804e51459d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:41 [async_llm.py:261] Added request cmpl-01f8e97168fa43f6a1d5f3804e51459d-0.
INFO 03-02 00:19:42 [logger.py:42] Received request cmpl-ba61df38fa664bed87c2f83db2b2f686-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:42 [async_llm.py:261] Added request cmpl-ba61df38fa664bed87c2f83db2b2f686-0.
INFO 03-02 00:19:43 [logger.py:42] Received request cmpl-68c0716ffc7d4f2db4bb44ef3c9772f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:43 [async_llm.py:261] Added request cmpl-68c0716ffc7d4f2db4bb44ef3c9772f2-0.
INFO 03-02 00:19:44 [logger.py:42] Received request cmpl-6e74663d5e494f99b3a2146de2887c93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:44 [async_llm.py:261] Added request cmpl-6e74663d5e494f99b3a2146de2887c93-0.
INFO 03-02 00:19:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:46 [logger.py:42] Received request cmpl-ff6580ed7a5f47d49cc92a86b7a721dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:46 [async_llm.py:261] Added request cmpl-ff6580ed7a5f47d49cc92a86b7a721dc-0.
INFO 03-02 00:19:47 [logger.py:42] Received request cmpl-6ab36ab2747a4eca9eaa24aed40ebf53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:47 [async_llm.py:261] Added request cmpl-6ab36ab2747a4eca9eaa24aed40ebf53-0.
INFO 03-02 00:19:48 [logger.py:42] Received request cmpl-648bcbd92808492486924b76f68fcd7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:48 [async_llm.py:261] Added request cmpl-648bcbd92808492486924b76f68fcd7e-0.
INFO 03-02 00:19:49 [logger.py:42] Received request cmpl-88645682b03a4ac3a517e4f7382c21e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:49 [async_llm.py:261] Added request cmpl-88645682b03a4ac3a517e4f7382c21e9-0.
INFO 03-02 00:19:50 [logger.py:42] Received request cmpl-b1160bdbfac84627ad13c3eff62b0924-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:50 [async_llm.py:261] Added request cmpl-b1160bdbfac84627ad13c3eff62b0924-0.
INFO 03-02 00:19:51 [logger.py:42] Received request cmpl-33e889a95ccd4b32a70625f99944d0c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:51 [async_llm.py:261] Added request cmpl-33e889a95ccd4b32a70625f99944d0c0-0.
INFO 03-02 00:19:52 [logger.py:42] Received request cmpl-9d1eefebcf1e427f89b364afb2cd58e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:52 [async_llm.py:261] Added request cmpl-9d1eefebcf1e427f89b364afb2cd58e7-0.
INFO 03-02 00:19:53 [logger.py:42] Received request cmpl-30d21b22cc4e426582b53a7c8fefe56b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:53 [async_llm.py:261] Added request cmpl-30d21b22cc4e426582b53a7c8fefe56b-0.
INFO 03-02 00:19:54 [logger.py:42] Received request cmpl-3b5b08918fa745c0a407bf6872840fee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:54 [async_llm.py:261] Added request cmpl-3b5b08918fa745c0a407bf6872840fee-0.
INFO 03-02 00:19:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:19:55 [logger.py:42] Received request cmpl-40361fe8378e42909d513d8a94b25f7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:19:55 [async_llm.py:261] Added request cmpl-40361fe8378e42909d513d8a94b25f7f-0.
INFO 03-02 00:20:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:40 [logger.py:42] Received request cmpl-72b254c3a2a4415d9fe6cdee0767569c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:40 [async_llm.py:261] Added request cmpl-72b254c3a2a4415d9fe6cdee0767569c-0.
INFO 03-02 00:20:41 [logger.py:42] Received request cmpl-bced07ddc8474142aa775be22b9b3684-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:41 [async_llm.py:261] Added request cmpl-bced07ddc8474142aa775be22b9b3684-0.
INFO 03-02 00:20:42 [logger.py:42] Received request cmpl-16c852b4d4e9463ba06fd1640c86e552-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:42 [async_llm.py:261] Added request cmpl-16c852b4d4e9463ba06fd1640c86e552-0.
INFO 03-02 00:20:44 [logger.py:42] Received request cmpl-d4583003daeb4f1696b52958a58637fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:44 [async_llm.py:261] Added request cmpl-d4583003daeb4f1696b52958a58637fb-0.
INFO 03-02 00:20:45 [logger.py:42] Received request cmpl-78e0c3a088a84f468b06753f3f8cd793-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:45 [async_llm.py:261] Added request cmpl-78e0c3a088a84f468b06753f3f8cd793-0.
INFO 03-02 00:20:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:46 [logger.py:42] Received request cmpl-ff9ae313202449d182d0db071e3c52a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:46 [async_llm.py:261] Added request cmpl-ff9ae313202449d182d0db071e3c52a6-0.
INFO 03-02 00:20:47 [logger.py:42] Received request cmpl-55727c87bde943f4a93bb529fed951e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:47 [async_llm.py:261] Added request cmpl-55727c87bde943f4a93bb529fed951e6-0.
INFO 03-02 00:20:48 [logger.py:42] Received request cmpl-59cd762c48b7473cb9f24225533ec0d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:48 [async_llm.py:261] Added request cmpl-59cd762c48b7473cb9f24225533ec0d6-0.
INFO 03-02 00:20:49 [logger.py:42] Received request cmpl-8abb447bdfc1460395961bd91d865c7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:49 [async_llm.py:261] Added request cmpl-8abb447bdfc1460395961bd91d865c7a-0.
INFO 03-02 00:20:50 [logger.py:42] Received request cmpl-a48245d17f884001b361ef5467f5c89c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:50 [async_llm.py:261] Added request cmpl-a48245d17f884001b361ef5467f5c89c-0.
INFO 03-02 00:20:51 [logger.py:42] Received request cmpl-04b3e94f9e6d40f9b61729c7bf75943a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:51 [async_llm.py:261] Added request cmpl-04b3e94f9e6d40f9b61729c7bf75943a-0.
INFO 03-02 00:20:52 [logger.py:42] Received request cmpl-8194638de40d4216a48736a42efc3c60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:52 [async_llm.py:261] Added request cmpl-8194638de40d4216a48736a42efc3c60-0.
INFO 03-02 00:20:53 [logger.py:42] Received request cmpl-7123ac1a1f1c4b1eb34703a68aa35e10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:53 [async_llm.py:261] Added request cmpl-7123ac1a1f1c4b1eb34703a68aa35e10-0.
INFO 03-02 00:20:55 [logger.py:42] Received request cmpl-0d33b6486f514841b2de25ae7bd3ffd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:55 [async_llm.py:261] Added request cmpl-0d33b6486f514841b2de25ae7bd3ffd8-0.
INFO 03-02 00:20:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:20:56 [logger.py:42] Received request cmpl-7211c8928dd64016a6b11b940b91d7b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:56 [async_llm.py:261] Added request cmpl-7211c8928dd64016a6b11b940b91d7b2-0.
INFO 03-02 00:20:57 [logger.py:42] Received request cmpl-4466dd97f00b425a8b68a924b60c1390-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:57 [async_llm.py:261] Added request cmpl-4466dd97f00b425a8b68a924b60c1390-0.
INFO 03-02 00:20:58 [logger.py:42] Received request cmpl-7d2b689419a941c1b3194ccf38205865-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:58 [async_llm.py:261] Added request cmpl-7d2b689419a941c1b3194ccf38205865-0.
INFO 03-02 00:20:59 [logger.py:42] Received request cmpl-c9a4d22b1cbf40b68e9229e8cbb6f144-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:20:59 [async_llm.py:261] Added request cmpl-c9a4d22b1cbf40b68e9229e8cbb6f144-0.
INFO 03-02 00:21:00 [logger.py:42] Received request cmpl-abb219e0288047899a51ee53358558dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:00 [async_llm.py:261] Added request cmpl-abb219e0288047899a51ee53358558dd-0.
INFO 03-02 00:21:01 [logger.py:42] Received request cmpl-fb0772aab8e141ca882ce4be010f4520-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:01 [async_llm.py:261] Added request cmpl-fb0772aab8e141ca882ce4be010f4520-0.
INFO 03-02 00:21:02 [logger.py:42] Received request cmpl-88d5f6dac1cd4e38a9eeca0ffc7f79e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:02 [async_llm.py:261] Added request cmpl-88d5f6dac1cd4e38a9eeca0ffc7f79e9-0.
INFO 03-02 00:21:03 [logger.py:42] Received request cmpl-67872b4c859a4baf952354f6c9808921-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:03 [async_llm.py:261] Added request cmpl-67872b4c859a4baf952354f6c9808921-0.
INFO 03-02 00:21:04 [logger.py:42] Received request cmpl-f2fa0921852343a5945e79b79fe768b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:04 [async_llm.py:261] Added request cmpl-f2fa0921852343a5945e79b79fe768b5-0.
INFO 03-02 00:21:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:05 [logger.py:42] Received request cmpl-f3c9025bcae84bc9ae84887663c6903e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:05 [async_llm.py:261] Added request cmpl-f3c9025bcae84bc9ae84887663c6903e-0.
INFO 03-02 00:21:07 [logger.py:42] Received request cmpl-01c10586856049818bced6f9f8cdae0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:07 [async_llm.py:261] Added request cmpl-01c10586856049818bced6f9f8cdae0e-0.
INFO 03-02 00:21:08 [logger.py:42] Received request cmpl-355038be74a947e19b6fe774cf53df6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:08 [async_llm.py:261] Added request cmpl-355038be74a947e19b6fe774cf53df6f-0.
INFO 03-02 00:21:09 [logger.py:42] Received request cmpl-f7b0ad2bd31a45e6a611abe9949b0cc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:09 [async_llm.py:261] Added request cmpl-f7b0ad2bd31a45e6a611abe9949b0cc1-0.
INFO 03-02 00:21:10 [logger.py:42] Received request cmpl-bbbd6a657bf1463597dc60c470aebd8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:10 [async_llm.py:261] Added request cmpl-bbbd6a657bf1463597dc60c470aebd8a-0.
INFO 03-02 00:21:11 [logger.py:42] Received request cmpl-07aaa38be8fc4958a754e50acd913114-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:11 [async_llm.py:261] Added request cmpl-07aaa38be8fc4958a754e50acd913114-0.
INFO 03-02 00:21:12 [logger.py:42] Received request cmpl-f679eb5da8d8428e94a19c9d675fd788-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:12 [async_llm.py:261] Added request cmpl-f679eb5da8d8428e94a19c9d675fd788-0.
INFO 03-02 00:21:13 [logger.py:42] Received request cmpl-babd186c69f14ff28b2abe15b380e172-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:13 [async_llm.py:261] Added request cmpl-babd186c69f14ff28b2abe15b380e172-0.
INFO 03-02 00:21:14 [logger.py:42] Received request cmpl-e3c7e77ed35547a6b449d097496c9b9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:14 [async_llm.py:261] Added request cmpl-e3c7e77ed35547a6b449d097496c9b9d-0.
INFO 03-02 00:21:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[41 further "Received request" / "200 OK" / "Added request" entries from 00:21:15 to 00:21:59 elided — identical prompt and SamplingParams (max_tokens=5, temperature=0.0), only the request IDs and timestamps differ]
INFO 03-02 00:21:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:21:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:21:59 [async_llm.py:261] Added request cmpl-64fd3cd9713b4c5cb632f8b9a075a2e1-0.
INFO 03-02 00:22:00 [logger.py:42] Received request cmpl-a49a6503330e491f8bf3d5ae01ddca2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:00 [async_llm.py:261] Added request cmpl-a49a6503330e491f8bf3d5ae01ddca2b-0.
INFO 03-02 00:22:01 [logger.py:42] Received request cmpl-57d415b67dec4ae3b1c75c352e5794fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:01 [async_llm.py:261] Added request cmpl-57d415b67dec4ae3b1c75c352e5794fb-0.
INFO 03-02 00:22:02 [logger.py:42] Received request cmpl-2819f9a43ee74a7bb9422f128001fac3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:02 [async_llm.py:261] Added request cmpl-2819f9a43ee74a7bb9422f128001fac3-0.
INFO 03-02 00:22:04 [logger.py:42] Received request cmpl-68539f3b6cc147e0903a5b5910adc8df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:04 [async_llm.py:261] Added request cmpl-68539f3b6cc147e0903a5b5910adc8df-0.
INFO 03-02 00:22:05 [logger.py:42] Received request cmpl-d19aa0ccc63f462fa1818e00caa8c233-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:05 [async_llm.py:261] Added request cmpl-d19aa0ccc63f462fa1818e00caa8c233-0.
INFO 03-02 00:22:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:06 [logger.py:42] Received request cmpl-a250733f40e64415a789399bc65b9f14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:06 [async_llm.py:261] Added request cmpl-a250733f40e64415a789399bc65b9f14-0.
INFO 03-02 00:22:07 [logger.py:42] Received request cmpl-37d83551a68b44449746ceb63ff417c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:07 [async_llm.py:261] Added request cmpl-37d83551a68b44449746ceb63ff417c1-0.
INFO 03-02 00:22:08 [logger.py:42] Received request cmpl-d9ef285d82a5447fa4336f1b33ccba08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:08 [async_llm.py:261] Added request cmpl-d9ef285d82a5447fa4336f1b33ccba08-0.
INFO 03-02 00:22:09 [logger.py:42] Received request cmpl-ab34e4bc9ab943698a61fcc79a88b0ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:09 [async_llm.py:261] Added request cmpl-ab34e4bc9ab943698a61fcc79a88b0ca-0.
INFO 03-02 00:22:10 [logger.py:42] Received request cmpl-cff7eaf662e44cd8bae70ecb41898246-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:10 [async_llm.py:261] Added request cmpl-cff7eaf662e44cd8bae70ecb41898246-0.
INFO 03-02 00:22:11 [logger.py:42] Received request cmpl-cd0846445ae942ae998b004f8d8a84a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:11 [async_llm.py:261] Added request cmpl-cd0846445ae942ae998b004f8d8a84a9-0.
INFO 03-02 00:22:12 [logger.py:42] Received request cmpl-6f65ae5f43b2441d89dc2344295be560-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:12 [async_llm.py:261] Added request cmpl-6f65ae5f43b2441d89dc2344295be560-0.
INFO 03-02 00:22:13 [logger.py:42] Received request cmpl-6090cdec2b6447e9a8894d8502e09d27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:13 [async_llm.py:261] Added request cmpl-6090cdec2b6447e9a8894d8502e09d27-0.
INFO 03-02 00:22:15 [logger.py:42] Received request cmpl-28bdbf03b9d14a709ba2d67470524917-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:15 [async_llm.py:261] Added request cmpl-28bdbf03b9d14a709ba2d67470524917-0.
INFO 03-02 00:22:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:16 [logger.py:42] Received request cmpl-85a7a918b0df4f65959894be8da01cf5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:16 [async_llm.py:261] Added request cmpl-85a7a918b0df4f65959894be8da01cf5-0.
INFO 03-02 00:22:17 [logger.py:42] Received request cmpl-a0e45b788ea24e0b94aac82a85d514b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:17 [async_llm.py:261] Added request cmpl-a0e45b788ea24e0b94aac82a85d514b3-0.
INFO 03-02 00:22:18 [logger.py:42] Received request cmpl-dfd6e06de58e4f9a9cad7f36533125a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:18 [async_llm.py:261] Added request cmpl-dfd6e06de58e4f9a9cad7f36533125a2-0.
INFO 03-02 00:22:19 [logger.py:42] Received request cmpl-95063752c1534ef195b3e59780384cdd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:19 [async_llm.py:261] Added request cmpl-95063752c1534ef195b3e59780384cdd-0.
INFO 03-02 00:22:20 [logger.py:42] Received request cmpl-1da5e5eabe8c47459b7773c5549061d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:20 [async_llm.py:261] Added request cmpl-1da5e5eabe8c47459b7773c5549061d8-0.
INFO 03-02 00:22:21 [logger.py:42] Received request cmpl-3a75307dd85046ba8af30da77309dae5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:21 [async_llm.py:261] Added request cmpl-3a75307dd85046ba8af30da77309dae5-0.
INFO 03-02 00:22:22 [logger.py:42] Received request cmpl-4456cb836bcb41daa8dac048d99a70fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:22 [async_llm.py:261] Added request cmpl-4456cb836bcb41daa8dac048d99a70fc-0.
INFO 03-02 00:22:23 [logger.py:42] Received request cmpl-52304007f29c4c24aec61865015c4e75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:23 [async_llm.py:261] Added request cmpl-52304007f29c4c24aec61865015c4e75-0.
INFO 03-02 00:22:24 [logger.py:42] Received request cmpl-6fa5ce2a63654fa38280d51e3f2f7b3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:24 [async_llm.py:261] Added request cmpl-6fa5ce2a63654fa38280d51e3f2f7b3c-0.
INFO 03-02 00:22:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:25 [logger.py:42] Received request cmpl-e27772a10b7c48b39f749eefffc7ab67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:25 [async_llm.py:261] Added request cmpl-e27772a10b7c48b39f749eefffc7ab67-0.
INFO 03-02 00:22:27 [logger.py:42] Received request cmpl-e0486409b9ed4ee7877da0cfe15e7cf0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:27 [async_llm.py:261] Added request cmpl-e0486409b9ed4ee7877da0cfe15e7cf0-0.
INFO 03-02 00:22:28 [logger.py:42] Received request cmpl-c652cbc40f824db7a447945d207cd0e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:28 [async_llm.py:261] Added request cmpl-c652cbc40f824db7a447945d207cd0e7-0.
INFO 03-02 00:22:29 [logger.py:42] Received request cmpl-ffcc806eb11b4a5b98612f801a1c968f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:29 [async_llm.py:261] Added request cmpl-ffcc806eb11b4a5b98612f801a1c968f-0.
INFO 03-02 00:22:30 [logger.py:42] Received request cmpl-c62555b071474f9498fce068fb93699f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:30 [async_llm.py:261] Added request cmpl-c62555b071474f9498fce068fb93699f-0.
INFO 03-02 00:22:31 [logger.py:42] Received request cmpl-1a883ae7627d46b98c417ca5c7cc4c15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:31 [async_llm.py:261] Added request cmpl-1a883ae7627d46b98c417ca5c7cc4c15-0.
INFO 03-02 00:22:32 [logger.py:42] Received request cmpl-a083726358824503a76a23c525f4ec16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:32 [async_llm.py:261] Added request cmpl-a083726358824503a76a23c525f4ec16-0.
INFO 03-02 00:22:33 [logger.py:42] Received request cmpl-9ee451b0e42142e89c8a2b91aa8cce45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:22:33 [async_llm.py:261] Added request cmpl-9ee451b0e42142e89c8a2b91aa8cce45-0.
[ ~40 additional request entries omitted: each is the same "Received request / 200 OK / Added request" triplet with identical SamplingParams (prompt: 'write a quick sort algorithm.', max_tokens=5, temperature=0.0); only the cmpl-* request ID and timestamp change, at roughly one request per second through 00:23:17 ]
INFO 03-02 00:22:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:22:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:17 [async_llm.py:261] Added request cmpl-7a09327a907e4fc7981ad13f74c61b91-0.
INFO 03-02 00:23:18 [logger.py:42] Received request cmpl-d5cf938f44f44a3bb81af8a208c488a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:18 [async_llm.py:261] Added request cmpl-d5cf938f44f44a3bb81af8a208c488a6-0.
INFO 03-02 00:23:19 [logger.py:42] Received request cmpl-f393f49daa9540ffac8198b90fbe55b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:19 [async_llm.py:261] Added request cmpl-f393f49daa9540ffac8198b90fbe55b4-0.
INFO 03-02 00:23:20 [logger.py:42] Received request cmpl-8f67a1bf264e43f38f50b69fbbae6217-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:20 [async_llm.py:261] Added request cmpl-8f67a1bf264e43f38f50b69fbbae6217-0.
INFO 03-02 00:23:21 [logger.py:42] Received request cmpl-58a212b1d18f40e4b4c743704ceb2024-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:21 [async_llm.py:261] Added request cmpl-58a212b1d18f40e4b4c743704ceb2024-0.
INFO 03-02 00:23:22 [logger.py:42] Received request cmpl-ce5d9f0d88194f83839c6a55e322c31d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:22 [async_llm.py:261] Added request cmpl-ce5d9f0d88194f83839c6a55e322c31d-0.
INFO 03-02 00:23:24 [logger.py:42] Received request cmpl-f161cebe4bcc45ea812dfa5bb3c93b3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:24 [async_llm.py:261] Added request cmpl-f161cebe4bcc45ea812dfa5bb3c93b3d-0.
INFO 03-02 00:23:25 [logger.py:42] Received request cmpl-223835daf3934f29bfcda42cfe3b60ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:25 [async_llm.py:261] Added request cmpl-223835daf3934f29bfcda42cfe3b60ae-0.
INFO 03-02 00:23:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:26 [logger.py:42] Received request cmpl-4294abcbf9a04522867e2ec8db8f9308-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:26 [async_llm.py:261] Added request cmpl-4294abcbf9a04522867e2ec8db8f9308-0.
INFO 03-02 00:23:27 [logger.py:42] Received request cmpl-96440070e39a43c280a279ad51a409c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:27 [async_llm.py:261] Added request cmpl-96440070e39a43c280a279ad51a409c1-0.
INFO 03-02 00:23:28 [logger.py:42] Received request cmpl-7d45ff58d5614be9a123f5fc36412898-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:28 [async_llm.py:261] Added request cmpl-7d45ff58d5614be9a123f5fc36412898-0.
INFO 03-02 00:23:29 [logger.py:42] Received request cmpl-e1c5c4b15df348f1add9ee62c66e7085-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:29 [async_llm.py:261] Added request cmpl-e1c5c4b15df348f1add9ee62c66e7085-0.
INFO 03-02 00:23:30 [logger.py:42] Received request cmpl-78000081468d40da83aec4c6a97fa9f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:30 [async_llm.py:261] Added request cmpl-78000081468d40da83aec4c6a97fa9f5-0.
INFO 03-02 00:23:31 [logger.py:42] Received request cmpl-e10a434e38d74e6986c39ec94eefce0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:31 [async_llm.py:261] Added request cmpl-e10a434e38d74e6986c39ec94eefce0e-0.
INFO 03-02 00:23:32 [logger.py:42] Received request cmpl-9b8f39010ff94d9caa7560a5da53e28b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:32 [async_llm.py:261] Added request cmpl-9b8f39010ff94d9caa7560a5da53e28b-0.
INFO 03-02 00:23:33 [logger.py:42] Received request cmpl-574c68ace1934cbda1d75d03b822582d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:33 [async_llm.py:261] Added request cmpl-574c68ace1934cbda1d75d03b822582d-0.
INFO 03-02 00:23:34 [logger.py:42] Received request cmpl-5a230e9b530243d2be6d4dd43adb0a78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:34 [async_llm.py:261] Added request cmpl-5a230e9b530243d2be6d4dd43adb0a78-0.
INFO 03-02 00:23:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
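The periodic `Engine 000` stats lines above follow a fixed textual format and can be machine-parsed for monitoring. A minimal sketch, assuming the observed layout stays stable (the regex below is inferred from these log lines, not a vLLM-provided API):

```python
import re

# Pattern inferred from the "Engine 000" stats lines in this log; it is an
# assumption about the format, not an official vLLM contract.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_pct>[\d.]+)%"
)

def parse_engine_stats(line: str):
    """Extract throughput and queue metrics from one stats log line.

    Returns a dict of parsed values, or None if the line does not match.
    """
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_pct"]),
    }

# Example: the stats line logged at 00:23:35 above.
line = ("INFO 03-02 00:23:35 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")
print(parse_engine_stats(line))
```

A parser like this could feed the throughput and queue-depth numbers into a dashboard instead of scrolling the raw log.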
INFO 03-02 00:23:36 [logger.py:42] Received request cmpl-1dcd8cc5900c4ab4acfaddd6d75bca49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:36 [async_llm.py:261] Added request cmpl-1dcd8cc5900c4ab4acfaddd6d75bca49-0.
INFO 03-02 00:23:37 [logger.py:42] Received request cmpl-fae997ba24174d7189dc9b5a51fb16dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:37 [async_llm.py:261] Added request cmpl-fae997ba24174d7189dc9b5a51fb16dd-0.
INFO 03-02 00:23:38 [logger.py:42] Received request cmpl-fcbfb22dd5c848afa343a01d6515d53d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:38 [async_llm.py:261] Added request cmpl-fcbfb22dd5c848afa343a01d6515d53d-0.
INFO 03-02 00:23:39 [logger.py:42] Received request cmpl-e1976c61aad74e418ab9b33918aa5eda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:39 [async_llm.py:261] Added request cmpl-e1976c61aad74e418ab9b33918aa5eda-0.
INFO 03-02 00:23:40 [logger.py:42] Received request cmpl-bf03f8c58746497391bfe3daf0b1718c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:40 [async_llm.py:261] Added request cmpl-bf03f8c58746497391bfe3daf0b1718c-0.
INFO 03-02 00:23:41 [logger.py:42] Received request cmpl-35284b155f6d4991b382aca8916eedc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:41 [async_llm.py:261] Added request cmpl-35284b155f6d4991b382aca8916eedc5-0.
INFO 03-02 00:23:42 [logger.py:42] Received request cmpl-8dd7e9e844e749138ee674ef80d58eb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:42 [async_llm.py:261] Added request cmpl-8dd7e9e844e749138ee674ef80d58eb2-0.
INFO 03-02 00:23:43 [logger.py:42] Received request cmpl-afc5d2760d2f4397928e55496e64f43e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:43 [async_llm.py:261] Added request cmpl-afc5d2760d2f4397928e55496e64f43e-0.
INFO 03-02 00:23:44 [logger.py:42] Received request cmpl-09f2425973354d709dc3e819d8c0f18a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:44 [async_llm.py:261] Added request cmpl-09f2425973354d709dc3e819d8c0f18a-0.
INFO 03-02 00:23:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:45 [logger.py:42] Received request cmpl-4c499810e6ab4fce9e6442f69f1ea844-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:45 [async_llm.py:261] Added request cmpl-4c499810e6ab4fce9e6442f69f1ea844-0.
INFO 03-02 00:23:47 [logger.py:42] Received request cmpl-af22dd64ad4b439a86d06bd9e5998056-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:47 [async_llm.py:261] Added request cmpl-af22dd64ad4b439a86d06bd9e5998056-0.
INFO 03-02 00:23:48 [logger.py:42] Received request cmpl-50ac73dfe0034b6dbf64ef51adbaf664-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:48 [async_llm.py:261] Added request cmpl-50ac73dfe0034b6dbf64ef51adbaf664-0.
INFO 03-02 00:23:49 [logger.py:42] Received request cmpl-1db4bb924f964a6fbafbb74b17777e31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:49 [async_llm.py:261] Added request cmpl-1db4bb924f964a6fbafbb74b17777e31-0.
INFO 03-02 00:23:50 [logger.py:42] Received request cmpl-39d27042e36541aa9513d7506ee6e479-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:50 [async_llm.py:261] Added request cmpl-39d27042e36541aa9513d7506ee6e479-0.
INFO 03-02 00:23:51 [logger.py:42] Received request cmpl-b41c788ba54047ce89206db08b17198b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:51 [async_llm.py:261] Added request cmpl-b41c788ba54047ce89206db08b17198b-0.
INFO 03-02 00:23:52 [logger.py:42] Received request cmpl-fb72c858ac5f4908a854164028effc96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:52 [async_llm.py:261] Added request cmpl-fb72c858ac5f4908a854164028effc96-0.
INFO 03-02 00:23:53 [logger.py:42] Received request cmpl-08bc9a895ae045798de8837021ffd6f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:53 [async_llm.py:261] Added request cmpl-08bc9a895ae045798de8837021ffd6f0-0.
INFO 03-02 00:23:54 [logger.py:42] Received request cmpl-dfb681dd32694165a4ce25de71dc0457-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:54 [async_llm.py:261] Added request cmpl-dfb681dd32694165a4ce25de71dc0457-0.
INFO 03-02 00:23:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:23:55 [logger.py:42] Received request cmpl-f90f2ec80c384dea91dd08813228fe80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:55 [async_llm.py:261] Added request cmpl-f90f2ec80c384dea91dd08813228fe80-0.
INFO 03-02 00:23:56 [logger.py:42] Received request cmpl-ac49d92c8b154212bcffa050e8cd41d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:56 [async_llm.py:261] Added request cmpl-ac49d92c8b154212bcffa050e8cd41d9-0.
INFO 03-02 00:23:57 [logger.py:42] Received request cmpl-669a20cb3ab441b68a25ab45f76f0dc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:57 [async_llm.py:261] Added request cmpl-669a20cb3ab441b68a25ab45f76f0dc7-0.
INFO 03-02 00:23:59 [logger.py:42] Received request cmpl-54faa62c6e9d43558cf6c0a697006152-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:23:59 [async_llm.py:261] Added request cmpl-54faa62c6e9d43558cf6c0a697006152-0.
INFO 03-02 00:24:00 [logger.py:42] Received request cmpl-3183c16311a54874b017797f12869641-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:00 [async_llm.py:261] Added request cmpl-3183c16311a54874b017797f12869641-0.
INFO 03-02 00:24:01 [logger.py:42] Received request cmpl-41127e27104e459e8b8cfa0190421507-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:01 [async_llm.py:261] Added request cmpl-41127e27104e459e8b8cfa0190421507-0.
INFO 03-02 00:24:02 [logger.py:42] Received request cmpl-89a8b05c15ba4131a2ce15cff0c1535d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:02 [async_llm.py:261] Added request cmpl-89a8b05c15ba4131a2ce15cff0c1535d-0.
INFO 03-02 00:24:03 [logger.py:42] Received request cmpl-9f9a5fd69d6544f8bd95a8731fe63b0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:03 [async_llm.py:261] Added request cmpl-9f9a5fd69d6544f8bd95a8731fe63b0e-0.
INFO 03-02 00:24:04 [logger.py:42] Received request cmpl-00863d6835964812913bb267274e57da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:04 [async_llm.py:261] Added request cmpl-00863d6835964812913bb267274e57da-0.
INFO 03-02 00:24:05 [logger.py:42] Received request cmpl-d942381d8650437e887c5b0f02588def-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:05 [async_llm.py:261] Added request cmpl-d942381d8650437e887c5b0f02588def-0.
INFO 03-02 00:24:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:06 [logger.py:42] Received request cmpl-848da9a4cda64089a36e2a2d7fe171b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:06 [async_llm.py:261] Added request cmpl-848da9a4cda64089a36e2a2d7fe171b1-0.
INFO 03-02 00:24:07 [logger.py:42] Received request cmpl-0571feeaebaa4b578f1d46eac1ac52dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:07 [async_llm.py:261] Added request cmpl-0571feeaebaa4b578f1d46eac1ac52dc-0.
INFO 03-02 00:24:08 [logger.py:42] Received request cmpl-8271323fe9e24d3f92a18a05a6c4514a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:08 [async_llm.py:261] Added request cmpl-8271323fe9e24d3f92a18a05a6c4514a-0.
INFO 03-02 00:24:10 [logger.py:42] Received request cmpl-af96c1e2cb7a4d308d6a8c13e5956948-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:10 [async_llm.py:261] Added request cmpl-af96c1e2cb7a4d308d6a8c13e5956948-0.
INFO 03-02 00:24:11 [logger.py:42] Received request cmpl-9eeb494cbbc54655b63f144cc4a53f10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:11 [async_llm.py:261] Added request cmpl-9eeb494cbbc54655b63f144cc4a53f10-0.
INFO 03-02 00:24:12 [logger.py:42] Received request cmpl-8d9aba08bff84e63906a60bcc24b8f25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:12 [async_llm.py:261] Added request cmpl-8d9aba08bff84e63906a60bcc24b8f25-0.
INFO 03-02 00:24:13 [logger.py:42] Received request cmpl-3b397342414f45e09013aef377ae69d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:13 [async_llm.py:261] Added request cmpl-3b397342414f45e09013aef377ae69d4-0.
INFO 03-02 00:24:14 [logger.py:42] Received request cmpl-c6866f99cd3b4ef18de2c3da525fd87c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:14 [async_llm.py:261] Added request cmpl-c6866f99cd3b4ef18de2c3da525fd87c-0.
INFO 03-02 00:24:15 [logger.py:42] Received request cmpl-ed9901c2e8194889ae60b41f33e5eeb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:15 [async_llm.py:261] Added request cmpl-ed9901c2e8194889ae60b41f33e5eeb5-0.
INFO 03-02 00:24:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:16 [logger.py:42] Received request cmpl-2518da173ba144b0ad2d488f55ca5b22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:16 [async_llm.py:261] Added request cmpl-2518da173ba144b0ad2d488f55ca5b22-0.
INFO 03-02 00:24:17 [logger.py:42] Received request cmpl-136a850f057d41a8b28cac79ed204e38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:17 [async_llm.py:261] Added request cmpl-136a850f057d41a8b28cac79ed204e38-0.
INFO 03-02 00:24:18 [logger.py:42] Received request cmpl-dd5f8353857a45a3814b8f067b87f6c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:18 [async_llm.py:261] Added request cmpl-dd5f8353857a45a3814b8f067b87f6c4-0.
INFO 03-02 00:24:19 [logger.py:42] Received request cmpl-66b00f1550de4ddc910fba2ae7416905-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:19 [async_llm.py:261] Added request cmpl-66b00f1550de4ddc910fba2ae7416905-0.
INFO 03-02 00:24:20 [logger.py:42] Received request cmpl-a51fe0f79dbc4ec2913fc2e5f7a005c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:20 [async_llm.py:261] Added request cmpl-a51fe0f79dbc4ec2913fc2e5f7a005c1-0.
INFO 03-02 00:24:22 [logger.py:42] Received request cmpl-d297ac9e6662454997742e35c8ad92db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:22 [async_llm.py:261] Added request cmpl-d297ac9e6662454997742e35c8ad92db-0.
INFO 03-02 00:24:23 [logger.py:42] Received request cmpl-0a59742c4557450d8c881b0d24de12f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:23 [async_llm.py:261] Added request cmpl-0a59742c4557450d8c881b0d24de12f5-0.
INFO 03-02 00:24:24 [logger.py:42] Received request cmpl-baddd66d3c0e451baaa0576e726b62a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:24 [async_llm.py:261] Added request cmpl-baddd66d3c0e451baaa0576e726b62a6-0.
INFO 03-02 00:24:25 [logger.py:42] Received request cmpl-614adecf029c4d55b4c4b06bb0aded10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:25 [async_llm.py:261] Added request cmpl-614adecf029c4d55b4c4b06bb0aded10-0.
INFO 03-02 00:24:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:26 [logger.py:42] Received request cmpl-1d20017f2a0340a39b1679ec7d2a9605-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:26 [async_llm.py:261] Added request cmpl-1d20017f2a0340a39b1679ec7d2a9605-0.
INFO 03-02 00:24:27 [logger.py:42] Received request cmpl-3ee9891bfa5c49cc940054c230cda336-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:27 [async_llm.py:261] Added request cmpl-3ee9891bfa5c49cc940054c230cda336-0.
INFO 03-02 00:24:28 [logger.py:42] Received request cmpl-121dbf2d184144a7a29cb35f7c03e564-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:28 [async_llm.py:261] Added request cmpl-121dbf2d184144a7a29cb35f7c03e564-0.
INFO 03-02 00:24:29 [logger.py:42] Received request cmpl-4cb816394b6e404c862c9a16dd1992fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:29 [async_llm.py:261] Added request cmpl-4cb816394b6e404c862c9a16dd1992fa-0.
INFO 03-02 00:24:30 [logger.py:42] Received request cmpl-81b00275f9214185878797a55684a810-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:30 [async_llm.py:261] Added request cmpl-81b00275f9214185878797a55684a810-0.
INFO 03-02 00:24:31 [logger.py:42] Received request cmpl-b2784e2463e643fbb52db0293fc8ccb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:31 [async_llm.py:261] Added request cmpl-b2784e2463e643fbb52db0293fc8ccb9-0.
INFO 03-02 00:24:33 [logger.py:42] Received request cmpl-30a01dd9c155477f9a0dc6df3fab28c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:33 [async_llm.py:261] Added request cmpl-30a01dd9c155477f9a0dc6df3fab28c7-0.
INFO 03-02 00:24:34 [logger.py:42] Received request cmpl-8d9a679b51494f1fa4bd4072d18bcfdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:34 [async_llm.py:261] Added request cmpl-8d9a679b51494f1fa4bd4072d18bcfdb-0.
INFO 03-02 00:24:35 [logger.py:42] Received request cmpl-8a08c730c6ff4c14bed30553dea73ba3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:35 [async_llm.py:261] Added request cmpl-8a08c730c6ff4c14bed30553dea73ba3-0.
INFO 03-02 00:24:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:36 [logger.py:42] Received request cmpl-94acc307bc3848e5b366a974730a4a24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:36 [async_llm.py:261] Added request cmpl-94acc307bc3848e5b366a974730a4a24-0.
INFO 03-02 00:24:37 [logger.py:42] Received request cmpl-538bca403eb2470a9ebbb63bd7279f51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:37 [async_llm.py:261] Added request cmpl-538bca403eb2470a9ebbb63bd7279f51-0.
INFO 03-02 00:24:38 [logger.py:42] Received request cmpl-8b35362f2c9648118b0f0683f60460f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:38 [async_llm.py:261] Added request cmpl-8b35362f2c9648118b0f0683f60460f9-0.
INFO 03-02 00:24:39 [logger.py:42] Received request cmpl-f5df10335b944b1b981b7d2df6bc1a49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:39 [async_llm.py:261] Added request cmpl-f5df10335b944b1b981b7d2df6bc1a49-0.
INFO 03-02 00:24:40 [logger.py:42] Received request cmpl-fe1ec091a97b427d9c381058ca91a92f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:40 [async_llm.py:261] Added request cmpl-fe1ec091a97b427d9c381058ca91a92f-0.
INFO 03-02 00:24:41 [logger.py:42] Received request cmpl-01995e2690c14818910c8e8d78b30fb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:41 [async_llm.py:261] Added request cmpl-01995e2690c14818910c8e8d78b30fb0-0.
INFO 03-02 00:24:42 [logger.py:42] Received request cmpl-a6e3a30eb5dc4e149dc4ab93fb03bce5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:42 [async_llm.py:261] Added request cmpl-a6e3a30eb5dc4e149dc4ab93fb03bce5-0.
INFO 03-02 00:24:43 [logger.py:42] Received request cmpl-9cd9e2dcb65341689b603570799183fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:43 [async_llm.py:261] Added request cmpl-9cd9e2dcb65341689b603570799183fa-0.
INFO 03-02 00:24:45 [logger.py:42] Received request cmpl-ef21f33a9f7748b19df2e7195957ae4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:45 [async_llm.py:261] Added request cmpl-ef21f33a9f7748b19df2e7195957ae4a-0.
INFO 03-02 00:24:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:46 [logger.py:42] Received request cmpl-033267662f494da181452527bf7ea80c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:46 [async_llm.py:261] Added request cmpl-033267662f494da181452527bf7ea80c-0.
INFO 03-02 00:24:47 [logger.py:42] Received request cmpl-95f1924cdbc447a38f31ff7a5f5ac126-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:47 [async_llm.py:261] Added request cmpl-95f1924cdbc447a38f31ff7a5f5ac126-0.
INFO 03-02 00:24:48 [logger.py:42] Received request cmpl-da3926beb4cd496686a63755d7778194-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:48 [async_llm.py:261] Added request cmpl-da3926beb4cd496686a63755d7778194-0.
INFO 03-02 00:24:49 [logger.py:42] Received request cmpl-1cf2caf6e5ba419a8ada5e9f57f1b702-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:49 [async_llm.py:261] Added request cmpl-1cf2caf6e5ba419a8ada5e9f57f1b702-0.
INFO 03-02 00:24:50 [logger.py:42] Received request cmpl-3bfb0144a9b34558b4bd6638859e9193-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:50 [async_llm.py:261] Added request cmpl-3bfb0144a9b34558b4bd6638859e9193-0.
INFO 03-02 00:24:51 [logger.py:42] Received request cmpl-2b71cf13d02f415d8dbb94cdcaa95db3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:51 [async_llm.py:261] Added request cmpl-2b71cf13d02f415d8dbb94cdcaa95db3-0.
INFO 03-02 00:24:52 [logger.py:42] Received request cmpl-e241c5c27d774779a4296861ae989f55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:52 [async_llm.py:261] Added request cmpl-e241c5c27d774779a4296861ae989f55-0.
INFO 03-02 00:24:53 [logger.py:42] Received request cmpl-8e48a4f7830f45cbaaab55891f698737-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:53 [async_llm.py:261] Added request cmpl-8e48a4f7830f45cbaaab55891f698737-0.
INFO 03-02 00:24:54 [logger.py:42] Received request cmpl-60b1ac4cd87e4a00ac5f923eb15f617a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:54 [async_llm.py:261] Added request cmpl-60b1ac4cd87e4a00ac5f923eb15f617a-0.
INFO 03-02 00:24:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:24:56 [logger.py:42] Received request cmpl-fc338762f1754aa1a2224cffa6bd3da5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:56 [async_llm.py:261] Added request cmpl-fc338762f1754aa1a2224cffa6bd3da5-0.
INFO 03-02 00:24:57 [logger.py:42] Received request cmpl-7dec22aed517485f86604f6963983670-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:57 [async_llm.py:261] Added request cmpl-7dec22aed517485f86604f6963983670-0.
INFO 03-02 00:24:58 [logger.py:42] Received request cmpl-b37d0b7bf7104222af251991dc46831e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:58 [async_llm.py:261] Added request cmpl-b37d0b7bf7104222af251991dc46831e-0.
INFO 03-02 00:24:59 [logger.py:42] Received request cmpl-3585962777344a99bc91892b45e0cc83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:24:59 [async_llm.py:261] Added request cmpl-3585962777344a99bc91892b45e0cc83-0.
INFO 03-02 00:25:00 [logger.py:42] Received request cmpl-4573c85dc7dd4f358e2f336a82fc643f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:00 [async_llm.py:261] Added request cmpl-4573c85dc7dd4f358e2f336a82fc643f-0.
INFO 03-02 00:25:01 [logger.py:42] Received request cmpl-edfdac8cd03d44619c226a4ca2984433-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:01 [async_llm.py:261] Added request cmpl-edfdac8cd03d44619c226a4ca2984433-0.
INFO 03-02 00:25:02 [logger.py:42] Received request cmpl-9d021b974972431c823686edac2f89a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:02 [async_llm.py:261] Added request cmpl-9d021b974972431c823686edac2f89a9-0.
INFO 03-02 00:25:03 [logger.py:42] Received request cmpl-56f14472b2884b718f02430ac67b927f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:03 [async_llm.py:261] Added request cmpl-56f14472b2884b718f02430ac67b927f-0.
INFO 03-02 00:25:04 [logger.py:42] Received request cmpl-01766118b8284dc59647eedbba4647c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:04 [async_llm.py:261] Added request cmpl-01766118b8284dc59647eedbba4647c5-0.
INFO 03-02 00:25:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:25:05 [logger.py:42] Received request cmpl-1b7925b39e4f4192a26743a6f7b4a507-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:05 [async_llm.py:261] Added request cmpl-1b7925b39e4f4192a26743a6f7b4a507-0.
INFO 03-02 00:25:06 [logger.py:42] Received request cmpl-cdccb44bfa584e6488c0f8c6a29c48a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:06 [async_llm.py:261] Added request cmpl-cdccb44bfa584e6488c0f8c6a29c48a7-0.
INFO 03-02 00:25:08 [logger.py:42] Received request cmpl-43d3a1280d9446c19c45574103b8f8c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:08 [async_llm.py:261] Added request cmpl-43d3a1280d9446c19c45574103b8f8c0-0.
INFO 03-02 00:25:09 [logger.py:42] Received request cmpl-67284c6b802e43eb9d801036fc09e8a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:09 [async_llm.py:261] Added request cmpl-67284c6b802e43eb9d801036fc09e8a2-0.
INFO 03-02 00:25:10 [logger.py:42] Received request cmpl-ecb8fc52e93a49ad92f00896bcd84a48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:10 [async_llm.py:261] Added request cmpl-ecb8fc52e93a49ad92f00896bcd84a48-0.
INFO 03-02 00:25:11 [logger.py:42] Received request cmpl-51e1f9551de041b6837d6a5b13349b8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:11 [async_llm.py:261] Added request cmpl-51e1f9551de041b6837d6a5b13349b8c-0.
INFO 03-02 00:25:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:25:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:25:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:25:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:25:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:25:56 [logger.py:42] Received request cmpl-27e76acb379e4013b33a7d6fab1cd265-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:56 [async_llm.py:261] Added request cmpl-27e76acb379e4013b33a7d6fab1cd265-0.
INFO 03-02 00:25:57 [logger.py:42] Received request cmpl-f228b9ebca8545d08f923dc03930958e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:57 [async_llm.py:261] Added request cmpl-f228b9ebca8545d08f923dc03930958e-0.
INFO 03-02 00:25:58 [logger.py:42] Received request cmpl-37604cc34afe480dafc76958ca47fda5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:58 [async_llm.py:261] Added request cmpl-37604cc34afe480dafc76958ca47fda5-0.
INFO 03-02 00:25:59 [logger.py:42] Received request cmpl-fbcb3443b1524eb39d6a2205d424990d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:25:59 [async_llm.py:261] Added request cmpl-fbcb3443b1524eb39d6a2205d424990d-0.
INFO 03-02 00:26:00 [logger.py:42] Received request cmpl-f6b2998ee3ed4ba58f951371823a6655-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:00 [async_llm.py:261] Added request cmpl-f6b2998ee3ed4ba58f951371823a6655-0.
INFO 03-02 00:26:01 [logger.py:42] Received request cmpl-7d48192726434807af52d0f51e355af4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:01 [async_llm.py:261] Added request cmpl-7d48192726434807af52d0f51e355af4-0.
INFO 03-02 00:26:02 [logger.py:42] Received request cmpl-aea955ac430d4b7fbadce151f2a4bfed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:02 [async_llm.py:261] Added request cmpl-aea955ac430d4b7fbadce151f2a4bfed-0.
INFO 03-02 00:26:03 [logger.py:42] Received request cmpl-4c91b8425eb140baa49d631458b3b4d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:03 [async_llm.py:261] Added request cmpl-4c91b8425eb140baa49d631458b3b4d2-0.
INFO 03-02 00:26:04 [logger.py:42] Received request cmpl-b75681c752794a718e2aa69eff777404-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:04 [async_llm.py:261] Added request cmpl-b75681c752794a718e2aa69eff777404-0.
INFO 03-02 00:26:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:06 [logger.py:42] Received request cmpl-01e80b7c79c14ba199af12c179f93a27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:06 [async_llm.py:261] Added request cmpl-01e80b7c79c14ba199af12c179f93a27-0.
INFO 03-02 00:26:07 [logger.py:42] Received request cmpl-3c07694deff643e8b0c379fd3995597d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:07 [async_llm.py:261] Added request cmpl-3c07694deff643e8b0c379fd3995597d-0.
INFO 03-02 00:26:08 [logger.py:42] Received request cmpl-b7d539ce20e64a73939939bc87359609-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:08 [async_llm.py:261] Added request cmpl-b7d539ce20e64a73939939bc87359609-0.
INFO 03-02 00:26:09 [logger.py:42] Received request cmpl-841b2c79e74a464d85b42a9b5412a760-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:09 [async_llm.py:261] Added request cmpl-841b2c79e74a464d85b42a9b5412a760-0.
INFO 03-02 00:26:10 [logger.py:42] Received request cmpl-fa3c3bf7da03421cbc2829d263600e1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:10 [async_llm.py:261] Added request cmpl-fa3c3bf7da03421cbc2829d263600e1e-0.
INFO 03-02 00:26:11 [logger.py:42] Received request cmpl-28efc884d5bd4605845ee480daa12c4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:11 [async_llm.py:261] Added request cmpl-28efc884d5bd4605845ee480daa12c4c-0.
INFO 03-02 00:26:12 [logger.py:42] Received request cmpl-8628bd318b304eb5ac39ed84dfafc67e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:12 [async_llm.py:261] Added request cmpl-8628bd318b304eb5ac39ed84dfafc67e-0.
INFO 03-02 00:26:13 [logger.py:42] Received request cmpl-cd267c0bbaa144218aa222a6bb62b44a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:13 [async_llm.py:261] Added request cmpl-cd267c0bbaa144218aa222a6bb62b44a-0.
INFO 03-02 00:26:14 [logger.py:42] Received request cmpl-cf5d0427091c4d899339a8ae87949e6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:14 [async_llm.py:261] Added request cmpl-cf5d0427091c4d899339a8ae87949e6c-0.
INFO 03-02 00:26:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:15 [logger.py:42] Received request cmpl-bec3509f04c74003b75af0cbad89ae43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:15 [async_llm.py:261] Added request cmpl-bec3509f04c74003b75af0cbad89ae43-0.
INFO 03-02 00:26:17 [logger.py:42] Received request cmpl-29715c38a5514f94a173e23fcdfe6dbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:17 [async_llm.py:261] Added request cmpl-29715c38a5514f94a173e23fcdfe6dbe-0.
INFO 03-02 00:26:18 [logger.py:42] Received request cmpl-da4775eaebf8487cbfdc9f9262383bdf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:18 [async_llm.py:261] Added request cmpl-da4775eaebf8487cbfdc9f9262383bdf-0.
INFO 03-02 00:26:19 [logger.py:42] Received request cmpl-d9d73487e1734a8190f9c6f88880aee2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:19 [async_llm.py:261] Added request cmpl-d9d73487e1734a8190f9c6f88880aee2-0.
INFO 03-02 00:26:20 [logger.py:42] Received request cmpl-8ad6088d9129464b85cc34abd12968b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:20 [async_llm.py:261] Added request cmpl-8ad6088d9129464b85cc34abd12968b4-0.
INFO 03-02 00:26:21 [logger.py:42] Received request cmpl-5e73a60cc9f24ec1bb3fa6148f109ec6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:21 [async_llm.py:261] Added request cmpl-5e73a60cc9f24ec1bb3fa6148f109ec6-0.
INFO 03-02 00:26:22 [logger.py:42] Received request cmpl-3194a8bb9a5a42318f596fea439d4748-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:22 [async_llm.py:261] Added request cmpl-3194a8bb9a5a42318f596fea439d4748-0.
INFO 03-02 00:26:23 [logger.py:42] Received request cmpl-4c39f2055bb2400394668557efc6cb31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:23 [async_llm.py:261] Added request cmpl-4c39f2055bb2400394668557efc6cb31-0.
INFO 03-02 00:26:24 [logger.py:42] Received request cmpl-4f9d26fef38a4c348857a9912c524650-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:24 [async_llm.py:261] Added request cmpl-4f9d26fef38a4c348857a9912c524650-0.
INFO 03-02 00:26:25 [logger.py:42] Received request cmpl-d2a2d08aafc346ceb1cb83ac24a5dbee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:25 [async_llm.py:261] Added request cmpl-d2a2d08aafc346ceb1cb83ac24a5dbee-0.
INFO 03-02 00:26:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:26 [logger.py:42] Received request cmpl-9f07a432e61e4470ad671cc44c6a75ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:26 [async_llm.py:261] Added request cmpl-9f07a432e61e4470ad671cc44c6a75ba-0.
INFO 03-02 00:26:27 [logger.py:42] Received request cmpl-6e735601f1064f1d9f3ae6258ab73b71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:27 [async_llm.py:261] Added request cmpl-6e735601f1064f1d9f3ae6258ab73b71-0.
INFO 03-02 00:26:29 [logger.py:42] Received request cmpl-9bc14ca694494b6fa13eb10a6b4b25b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:29 [async_llm.py:261] Added request cmpl-9bc14ca694494b6fa13eb10a6b4b25b6-0.
INFO 03-02 00:26:30 [logger.py:42] Received request cmpl-5b00627d26884d748e2181d47d157b2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:30 [async_llm.py:261] Added request cmpl-5b00627d26884d748e2181d47d157b2b-0.
INFO 03-02 00:26:31 [logger.py:42] Received request cmpl-f41a8c1e5994493ba55411819d77af1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:31 [async_llm.py:261] Added request cmpl-f41a8c1e5994493ba55411819d77af1c-0.
INFO 03-02 00:26:32 [logger.py:42] Received request cmpl-3e7a7e25e8d041f7916cad4bb54b7c39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:32 [async_llm.py:261] Added request cmpl-3e7a7e25e8d041f7916cad4bb54b7c39-0.
INFO 03-02 00:26:33 [logger.py:42] Received request cmpl-5ac3827edcd84c59b03a9c44bcd4a019-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:33 [async_llm.py:261] Added request cmpl-5ac3827edcd84c59b03a9c44bcd4a019-0.
INFO 03-02 00:26:34 [logger.py:42] Received request cmpl-65d5d226c35048dfa40c52d1b725e583-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:34 [async_llm.py:261] Added request cmpl-65d5d226c35048dfa40c52d1b725e583-0.
INFO 03-02 00:26:35 [logger.py:42] Received request cmpl-eb14e8f205414bc6a9b31090157c775d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:35 [async_llm.py:261] Added request cmpl-eb14e8f205414bc6a9b31090157c775d-0.
INFO 03-02 00:26:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:36 [logger.py:42] Received request cmpl-e9ef83b4686f48a3ad79962d7f96eded-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:36 [async_llm.py:261] Added request cmpl-e9ef83b4686f48a3ad79962d7f96eded-0.
INFO 03-02 00:26:37 [logger.py:42] Received request cmpl-ad2efed3e9ff4dcdaa3c7f8665c07b60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:37 [async_llm.py:261] Added request cmpl-ad2efed3e9ff4dcdaa3c7f8665c07b60-0.
INFO 03-02 00:26:38 [logger.py:42] Received request cmpl-eac2ce69f5974d0d922793bed9a0d9c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:38 [async_llm.py:261] Added request cmpl-eac2ce69f5974d0d922793bed9a0d9c6-0.
INFO 03-02 00:26:40 [logger.py:42] Received request cmpl-d5844cc8bc074bf3a13cf54f2f667196-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:40 [async_llm.py:261] Added request cmpl-d5844cc8bc074bf3a13cf54f2f667196-0.
INFO 03-02 00:26:41 [logger.py:42] Received request cmpl-48512cbfc13e4fb99883ef81dd5b630c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:41 [async_llm.py:261] Added request cmpl-48512cbfc13e4fb99883ef81dd5b630c-0.
INFO 03-02 00:26:42 [logger.py:42] Received request cmpl-1770ff369a5d4cd881d75478b0a6c24d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:42 [async_llm.py:261] Added request cmpl-1770ff369a5d4cd881d75478b0a6c24d-0.
INFO 03-02 00:26:43 [logger.py:42] Received request cmpl-e542da3007464b919d71e3e57774e3ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:43 [async_llm.py:261] Added request cmpl-e542da3007464b919d71e3e57774e3ee-0.
INFO 03-02 00:26:44 [logger.py:42] Received request cmpl-db1ec37f23834b548b8892a59fba0bdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:44 [async_llm.py:261] Added request cmpl-db1ec37f23834b548b8892a59fba0bdb-0.
INFO 03-02 00:26:45 [logger.py:42] Received request cmpl-48c600cd3fe14e579d128177fe192b51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:45 [async_llm.py:261] Added request cmpl-48c600cd3fe14e579d128177fe192b51-0.
INFO 03-02 00:26:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:46 [logger.py:42] Received request cmpl-d3fb1f19f25d4490b0942ce54463e90a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:46 [async_llm.py:261] Added request cmpl-d3fb1f19f25d4490b0942ce54463e90a-0.
INFO 03-02 00:26:47 [logger.py:42] Received request cmpl-2c33b3a457374c1ba05ccf9c2217e8f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:47 [async_llm.py:261] Added request cmpl-2c33b3a457374c1ba05ccf9c2217e8f3-0.
INFO 03-02 00:26:48 [logger.py:42] Received request cmpl-321b225c320244c1971850c89e07eadb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:48 [async_llm.py:261] Added request cmpl-321b225c320244c1971850c89e07eadb-0.
INFO 03-02 00:26:49 [logger.py:42] Received request cmpl-b7940681c6a444a8be0511a1d346ded5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:49 [async_llm.py:261] Added request cmpl-b7940681c6a444a8be0511a1d346ded5-0.
INFO 03-02 00:26:50 [logger.py:42] Received request cmpl-f39aee126691449d823fd8058edc0763-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:50 [async_llm.py:261] Added request cmpl-f39aee126691449d823fd8058edc0763-0.
INFO 03-02 00:26:52 [logger.py:42] Received request cmpl-693d250c632b4e49a6abc879a64cb960-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:52 [async_llm.py:261] Added request cmpl-693d250c632b4e49a6abc879a64cb960-0.
INFO 03-02 00:26:53 [logger.py:42] Received request cmpl-60fde09c060442c69935a320c20da148-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:53 [async_llm.py:261] Added request cmpl-60fde09c060442c69935a320c20da148-0.
INFO 03-02 00:26:54 [logger.py:42] Received request cmpl-d14143bb2545468782b6ba1a70d927da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:54 [async_llm.py:261] Added request cmpl-d14143bb2545468782b6ba1a70d927da-0.
INFO 03-02 00:26:55 [logger.py:42] Received request cmpl-d79123320a2247f28d94b3539bc62183-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:55 [async_llm.py:261] Added request cmpl-d79123320a2247f28d94b3539bc62183-0.
INFO 03-02 00:26:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:26:56 [logger.py:42] Received request cmpl-68edf6acdced44fc855f845010cb0aec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:56 [async_llm.py:261] Added request cmpl-68edf6acdced44fc855f845010cb0aec-0.
INFO 03-02 00:26:57 [logger.py:42] Received request cmpl-0f331d4030194059882ce450ab747db5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:57 [async_llm.py:261] Added request cmpl-0f331d4030194059882ce450ab747db5-0.
INFO 03-02 00:26:58 [logger.py:42] Received request cmpl-f2488afbd9fa4f8aa20903ec2d955bb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:58 [async_llm.py:261] Added request cmpl-f2488afbd9fa4f8aa20903ec2d955bb8-0.
INFO 03-02 00:26:59 [logger.py:42] Received request cmpl-c12cac570ba24ee7b71f697f5c5126b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:26:59 [async_llm.py:261] Added request cmpl-c12cac570ba24ee7b71f697f5c5126b8-0.
INFO 03-02 00:27:00 [logger.py:42] Received request cmpl-996fbe64bb154ebfa2623f33e916c8a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:00 [async_llm.py:261] Added request cmpl-996fbe64bb154ebfa2623f33e916c8a3-0.
INFO 03-02 00:27:01 [logger.py:42] Received request cmpl-018b5d35db204650b831e5198b84287b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:01 [async_llm.py:261] Added request cmpl-018b5d35db204650b831e5198b84287b-0.
INFO 03-02 00:27:03 [logger.py:42] Received request cmpl-367be4d2657b40e18c071417937d40b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:03 [async_llm.py:261] Added request cmpl-367be4d2657b40e18c071417937d40b2-0.
INFO 03-02 00:27:04 [logger.py:42] Received request cmpl-c3f1872b58d64e718d0ac7d7699052f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:04 [async_llm.py:261] Added request cmpl-c3f1872b58d64e718d0ac7d7699052f4-0.
INFO 03-02 00:27:05 [logger.py:42] Received request cmpl-546458acbceb49a1a690a96b6f430d35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:05 [async_llm.py:261] Added request cmpl-546458acbceb49a1a690a96b6f430d35-0.
INFO 03-02 00:27:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:06 [logger.py:42] Received request cmpl-c20fc8fe27cd43c3a4c20d92cbc21aa6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:06 [async_llm.py:261] Added request cmpl-c20fc8fe27cd43c3a4c20d92cbc21aa6-0.
INFO 03-02 00:27:07 [logger.py:42] Received request cmpl-9e89218227b740e8935b73ccbb26719f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:07 [async_llm.py:261] Added request cmpl-9e89218227b740e8935b73ccbb26719f-0.
INFO 03-02 00:27:08 [logger.py:42] Received request cmpl-a472a63490c7490984e98a9b5189938f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:08 [async_llm.py:261] Added request cmpl-a472a63490c7490984e98a9b5189938f-0.
INFO 03-02 00:27:09 [logger.py:42] Received request cmpl-d8c199a17af348ad80a9240d01aa5763-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:09 [async_llm.py:261] Added request cmpl-d8c199a17af348ad80a9240d01aa5763-0.
INFO 03-02 00:27:10 [logger.py:42] Received request cmpl-16644b5f6b1841dfbdc47a7d9c413df8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:10 [async_llm.py:261] Added request cmpl-16644b5f6b1841dfbdc47a7d9c413df8-0.
INFO 03-02 00:27:11 [logger.py:42] Received request cmpl-2153719ddba2416a9cea98ff7885dc50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:11 [async_llm.py:261] Added request cmpl-2153719ddba2416a9cea98ff7885dc50-0.
INFO 03-02 00:27:12 [logger.py:42] Received request cmpl-1a7ddf47fdfe48fb9ac4b9a6d310689f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:12 [async_llm.py:261] Added request cmpl-1a7ddf47fdfe48fb9ac4b9a6d310689f-0.
INFO 03-02 00:27:13 [logger.py:42] Received request cmpl-280466a6fe0a4bc6ab04fdbfee3eb1fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:13 [async_llm.py:261] Added request cmpl-280466a6fe0a4bc6ab04fdbfee3eb1fc-0.
INFO 03-02 00:27:15 [logger.py:42] Received request cmpl-96bf09a76a6147d29796e3854a5438b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:15 [async_llm.py:261] Added request cmpl-96bf09a76a6147d29796e3854a5438b1-0.
INFO 03-02 00:27:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:16 [logger.py:42] Received request cmpl-453ff319e3a6438db9db3deefa66f254-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:16 [async_llm.py:261] Added request cmpl-453ff319e3a6438db9db3deefa66f254-0.
INFO 03-02 00:27:17 [logger.py:42] Received request cmpl-7836be17ae4c48d4bca5539ee63e99f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:17 [async_llm.py:261] Added request cmpl-7836be17ae4c48d4bca5539ee63e99f5-0.
INFO 03-02 00:27:18 [logger.py:42] Received request cmpl-b092e02226d54a1abec1218154b67270-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:18 [async_llm.py:261] Added request cmpl-b092e02226d54a1abec1218154b67270-0.
INFO 03-02 00:27:19 [logger.py:42] Received request cmpl-874a6dc4245043839b2a51bf1965fdcc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:19 [async_llm.py:261] Added request cmpl-874a6dc4245043839b2a51bf1965fdcc-0.
INFO 03-02 00:27:20 [logger.py:42] Received request cmpl-82feb7f43ffa44fabfe88614dc25c28a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:20 [async_llm.py:261] Added request cmpl-82feb7f43ffa44fabfe88614dc25c28a-0.
INFO 03-02 00:27:21 [logger.py:42] Received request cmpl-d862e0ca049a4facad8f595b6173cd76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:21 [async_llm.py:261] Added request cmpl-d862e0ca049a4facad8f595b6173cd76-0.
INFO 03-02 00:27:22 [logger.py:42] Received request cmpl-a1922617777e4337afb815d955998a7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:22 [async_llm.py:261] Added request cmpl-a1922617777e4337afb815d955998a7b-0.
INFO 03-02 00:27:23 [logger.py:42] Received request cmpl-83731345dae54d5c933859384d53c8fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:23 [async_llm.py:261] Added request cmpl-83731345dae54d5c933859384d53c8fd-0.
INFO 03-02 00:27:24 [logger.py:42] Received request cmpl-aec35db4ed98484b8b78672acbee0d87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:24 [async_llm.py:261] Added request cmpl-aec35db4ed98484b8b78672acbee0d87-0.
INFO 03-02 00:27:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:26 [logger.py:42] Received request cmpl-a1e44063b0a44b7fa2de88250d5993b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:26 [async_llm.py:261] Added request cmpl-a1e44063b0a44b7fa2de88250d5993b4-0.
INFO 03-02 00:27:27 [logger.py:42] Received request cmpl-cd79d837530e4390950270c1fc3dcb7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:27 [async_llm.py:261] Added request cmpl-cd79d837530e4390950270c1fc3dcb7d-0.
INFO 03-02 00:27:28 [logger.py:42] Received request cmpl-1d04eca6b6c24329b53d18272b0dafaf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:28 [async_llm.py:261] Added request cmpl-1d04eca6b6c24329b53d18272b0dafaf-0.
INFO 03-02 00:27:29 [logger.py:42] Received request cmpl-43cadd018a324e388f27d6dc723115c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:29 [async_llm.py:261] Added request cmpl-43cadd018a324e388f27d6dc723115c8-0.
INFO 03-02 00:27:30 [logger.py:42] Received request cmpl-1b541b092cf441119bb973c49a5a9148-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:30 [async_llm.py:261] Added request cmpl-1b541b092cf441119bb973c49a5a9148-0.
INFO 03-02 00:27:31 [logger.py:42] Received request cmpl-15153e90da7c49c4aa559e3d15c04eb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:31 [async_llm.py:261] Added request cmpl-15153e90da7c49c4aa559e3d15c04eb8-0.
INFO 03-02 00:27:32 [logger.py:42] Received request cmpl-64820a33d64c4b789a5bf734ea8b9e87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:32 [async_llm.py:261] Added request cmpl-64820a33d64c4b789a5bf734ea8b9e87-0.
INFO 03-02 00:27:33 [logger.py:42] Received request cmpl-cd7cf2cee31d42478b37d83718144b21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:33 [async_llm.py:261] Added request cmpl-cd7cf2cee31d42478b37d83718144b21-0.
INFO 03-02 00:27:34 [logger.py:42] Received request cmpl-74d0f0280fb5470a9fc961ef0f6a8033-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:34 [async_llm.py:261] Added request cmpl-74d0f0280fb5470a9fc961ef0f6a8033-0.
INFO 03-02 00:27:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:35 [logger.py:42] Received request cmpl-f89331e86a5d4acd91ce684b97c80023-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:35 [async_llm.py:261] Added request cmpl-f89331e86a5d4acd91ce684b97c80023-0.
INFO 03-02 00:27:36 [logger.py:42] Received request cmpl-9e183fabf7cf4e25afde9fb6d5ec3208-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:36 [async_llm.py:261] Added request cmpl-9e183fabf7cf4e25afde9fb6d5ec3208-0.
INFO 03-02 00:27:38 [logger.py:42] Received request cmpl-362b001b4e794743a8cebaed7d0913fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:38 [async_llm.py:261] Added request cmpl-362b001b4e794743a8cebaed7d0913fb-0.
INFO 03-02 00:27:39 [logger.py:42] Received request cmpl-b6d3093f76264ac493e2964a073f8146-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:39 [async_llm.py:261] Added request cmpl-b6d3093f76264ac493e2964a073f8146-0.
INFO 03-02 00:27:40 [logger.py:42] Received request cmpl-5df53abb5093419888b447b54c588910-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:40 [async_llm.py:261] Added request cmpl-5df53abb5093419888b447b54c588910-0.
INFO 03-02 00:27:41 [logger.py:42] Received request cmpl-d1fb7b0d5cdd4240be48f31804ceb32f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:41 [async_llm.py:261] Added request cmpl-d1fb7b0d5cdd4240be48f31804ceb32f-0.
INFO 03-02 00:27:42 [logger.py:42] Received request cmpl-7cb923d086994ee7b2ecaa8df2497f30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:42 [async_llm.py:261] Added request cmpl-7cb923d086994ee7b2ecaa8df2497f30-0.
INFO 03-02 00:27:43 [logger.py:42] Received request cmpl-d5e1822232464c449ec94c73f7fde61a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:43 [async_llm.py:261] Added request cmpl-d5e1822232464c449ec94c73f7fde61a-0.
INFO 03-02 00:27:44 [logger.py:42] Received request cmpl-d2f54ffb53934d7f97432bbefde143e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:44 [async_llm.py:261] Added request cmpl-d2f54ffb53934d7f97432bbefde143e4-0.
INFO 03-02 00:27:45 [logger.py:42] Received request cmpl-524f18bd24f54b8686fb45a17551f53c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:45 [async_llm.py:261] Added request cmpl-524f18bd24f54b8686fb45a17551f53c-0.
INFO 03-02 00:27:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:27:46 [logger.py:42] Received request cmpl-427a6e56431a458a8f99a7ddefb017ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:46 [async_llm.py:261] Added request cmpl-427a6e56431a458a8f99a7ddefb017ab-0.
INFO 03-02 00:27:47 [logger.py:42] Received request cmpl-52ae3665c0ae4e67b15f366e8077807f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:47 [async_llm.py:261] Added request cmpl-52ae3665c0ae4e67b15f366e8077807f-0.
INFO 03-02 00:27:49 [logger.py:42] Received request cmpl-ade6fb5be9d242e5be91749421b9bf30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:49 [async_llm.py:261] Added request cmpl-ade6fb5be9d242e5be91749421b9bf30-0.
INFO 03-02 00:27:50 [logger.py:42] Received request cmpl-992a5e1435e546988f57a7a65f0a8795-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:50 [async_llm.py:261] Added request cmpl-992a5e1435e546988f57a7a65f0a8795-0.
INFO 03-02 00:27:51 [logger.py:42] Received request cmpl-4a58bff05beb46d5ac9d0fd91ff2a0b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:51 [async_llm.py:261] Added request cmpl-4a58bff05beb46d5ac9d0fd91ff2a0b8-0.
INFO 03-02 00:27:52 [logger.py:42] Received request cmpl-51990bfb11a6433192f0ee563efeaea9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:52 [async_llm.py:261] Added request cmpl-51990bfb11a6433192f0ee563efeaea9-0.
INFO 03-02 00:27:53 [logger.py:42] Received request cmpl-16e2cf32d9194118ba12fdffa3820705-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:53 [async_llm.py:261] Added request cmpl-16e2cf32d9194118ba12fdffa3820705-0.
INFO 03-02 00:27:54 [logger.py:42] Received request cmpl-842d70eebc9a4a679331059fa568fea7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:54 [async_llm.py:261] Added request cmpl-842d70eebc9a4a679331059fa568fea7-0.
INFO 03-02 00:27:55 [logger.py:42] Received request cmpl-1092f862c15a44a98972aa05a739810e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:55 [async_llm.py:261] Added request cmpl-1092f862c15a44a98972aa05a739810e-0.
INFO 03-02 00:27:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
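The periodic `loggers.py:116` summary lines above embed the engine's throughput and cache metrics in a fixed text format. As a minimal sketch (the field names below are assumptions chosen for illustration; the pattern itself is taken verbatim from the log format), these lines can be scraped with a small regex:

```python
import re

# Pattern matching the vLLM engine summary line format seen in this log.
SUMMARY_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_summary(line: str):
    """Extract engine metrics from a summary log line, or None if absent."""
    m = SUMMARY_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_hit_pct"]),
    }

# A summary line copied from the log above.
line = ("INFO 03-02 00:27:55 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
        "Prefix cache hit rate: 0.0%")
print(parse_summary(line))
```

Feeding the whole log through such a parser gives a quick time series of generation throughput (here steady around 4.5–4.9 tokens/s with the engine otherwise idle between one-per-second requests).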
INFO 03-02 00:27:56 [logger.py:42] Received request cmpl-74990a6d3aad49ab972687be60dc08df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:56 [async_llm.py:261] Added request cmpl-74990a6d3aad49ab972687be60dc08df-0.
INFO 03-02 00:27:57 [logger.py:42] Received request cmpl-e953d475ea3c49338001446832b930c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:57 [async_llm.py:261] Added request cmpl-e953d475ea3c49338001446832b930c0-0.
INFO 03-02 00:27:58 [logger.py:42] Received request cmpl-378fe21ea47540629bed5663462de4a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:58 [async_llm.py:261] Added request cmpl-378fe21ea47540629bed5663462de4a7-0.
INFO 03-02 00:27:59 [logger.py:42] Received request cmpl-61ffa4cd69bd44a28524c54bba055b4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:27:59 [async_llm.py:261] Added request cmpl-61ffa4cd69bd44a28524c54bba055b4e-0.
INFO 03-02 00:28:01 [logger.py:42] Received request cmpl-0f4ce6d69e7f450e87c9f406a83b7f96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:01 [async_llm.py:261] Added request cmpl-0f4ce6d69e7f450e87c9f406a83b7f96-0.
INFO 03-02 00:28:02 [logger.py:42] Received request cmpl-aaa9b5feb61c40528fe2322ff57624f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:02 [async_llm.py:261] Added request cmpl-aaa9b5feb61c40528fe2322ff57624f7-0.
INFO 03-02 00:28:03 [logger.py:42] Received request cmpl-1cdd36af02cd438bb7a7d3a899cd9892-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:03 [async_llm.py:261] Added request cmpl-1cdd36af02cd438bb7a7d3a899cd9892-0.
INFO 03-02 00:28:04 [logger.py:42] Received request cmpl-ba02147d23ef48ffaf4ff905f1c9d3c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:04 [async_llm.py:261] Added request cmpl-ba02147d23ef48ffaf4ff905f1c9d3c0-0.
INFO 03-02 00:28:05 [logger.py:42] Received request cmpl-14665121a51f47efa243f15007647350-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:05 [async_llm.py:261] Added request cmpl-14665121a51f47efa243f15007647350-0.
INFO 03-02 00:28:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:28:06 [logger.py:42] Received request cmpl-de0392606b0f4123a1976c55c80b4ae0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:06 [async_llm.py:261] Added request cmpl-de0392606b0f4123a1976c55c80b4ae0-0.
INFO 03-02 00:28:07 [logger.py:42] Received request cmpl-8ba1d3ef7425449cafd7afccee7e4a2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:07 [async_llm.py:261] Added request cmpl-8ba1d3ef7425449cafd7afccee7e4a2b-0.
INFO 03-02 00:28:08 [logger.py:42] Received request cmpl-0e0e9a2c39f84e19a222e21c596ad828-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:08 [async_llm.py:261] Added request cmpl-0e0e9a2c39f84e19a222e21c596ad828-0.
INFO 03-02 00:28:09 [logger.py:42] Received request cmpl-8ceec3b6972b4286a771bb3e8a9ee881-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:09 [async_llm.py:261] Added request cmpl-8ceec3b6972b4286a771bb3e8a9ee881-0.
INFO 03-02 00:28:11 [logger.py:42] Received request cmpl-4498a93063b549a2a31df98aa60e84ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:11 [async_llm.py:261] Added request cmpl-4498a93063b549a2a31df98aa60e84ad-0.
INFO 03-02 00:28:12 [logger.py:42] Received request cmpl-38edd417a7d346b286cd5a1d4dcf74ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:12 [async_llm.py:261] Added request cmpl-38edd417a7d346b286cd5a1d4dcf74ca-0.
INFO 03-02 00:28:13 [logger.py:42] Received request cmpl-4c5eaf828f064526bb74b6a0ae7e80b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:13 [async_llm.py:261] Added request cmpl-4c5eaf828f064526bb74b6a0ae7e80b1-0.
INFO 03-02 00:28:14 [logger.py:42] Received request cmpl-5c00ee9be94743e09e7f15309364d4be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:14 [async_llm.py:261] Added request cmpl-5c00ee9be94743e09e7f15309364d4be-0.
INFO 03-02 00:28:15 [logger.py:42] Received request cmpl-b17064538db740a98a252b288284cbf1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:15 [async_llm.py:261] Added request cmpl-b17064538db740a98a252b288284cbf1-0.
INFO 03-02 00:28:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:28:16 [logger.py:42] Received request cmpl-7db60a4344b041f7a54dbda66b5ea9a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:16 [async_llm.py:261] Added request cmpl-7db60a4344b041f7a54dbda66b5ea9a0-0.
INFO 03-02 00:28:17 [logger.py:42] Received request cmpl-f78fb3871c00475c976d9ae7827eecb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:17 [async_llm.py:261] Added request cmpl-f78fb3871c00475c976d9ae7827eecb3-0.
INFO 03-02 00:28:18 [logger.py:42] Received request cmpl-a139a60e5e4b4b8fb7dc8a9171279527-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:18 [async_llm.py:261] Added request cmpl-a139a60e5e4b4b8fb7dc8a9171279527-0.
INFO 03-02 00:28:19 [logger.py:42] Received request cmpl-6d26485b64ee4777b4713599f5b9eb41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:19 [async_llm.py:261] Added request cmpl-6d26485b64ee4777b4713599f5b9eb41-0.
INFO 03-02 00:28:20 [logger.py:42] Received request cmpl-0eb47fd502ce402bb8ad97b8d1230fd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:20 [async_llm.py:261] Added request cmpl-0eb47fd502ce402bb8ad97b8d1230fd6-0.
INFO 03-02 00:28:21 [logger.py:42] Received request cmpl-31983a2c251a4b56b9e27484a21e20b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:21 [async_llm.py:261] Added request cmpl-31983a2c251a4b56b9e27484a21e20b3-0.
INFO 03-02 00:28:23 [logger.py:42] Received request cmpl-ff51268d17df45b0820ed546bb099c22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:23 [async_llm.py:261] Added request cmpl-ff51268d17df45b0820ed546bb099c22-0.
INFO 03-02 00:28:24 [logger.py:42] Received request cmpl-48edb64b26be4ed49a34a8ba324555d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:24 [async_llm.py:261] Added request cmpl-48edb64b26be4ed49a34a8ba324555d6-0.
INFO 03-02 00:28:25 [logger.py:42] Received request cmpl-16cf56eb866545cfbe26c1b612fd911c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:25 [async_llm.py:261] Added request cmpl-16cf56eb866545cfbe26c1b612fd911c-0.
INFO 03-02 00:28:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:28:26 [logger.py:42] Received request cmpl-f42b1a561edf4c3891689a832eff93cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:26 [async_llm.py:261] Added request cmpl-f42b1a561edf4c3891689a832eff93cc-0.
INFO 03-02 00:28:27 [logger.py:42] Received request cmpl-a5419a007c64473487bcb51e6c27c17a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:27 [async_llm.py:261] Added request cmpl-a5419a007c64473487bcb51e6c27c17a-0.
INFO 03-02 00:28:28 [logger.py:42] Received request cmpl-b4350360b68847a0a5766d69e9639fc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:28:28 [async_llm.py:261] Added request cmpl-b4350360b68847a0a5766d69e9639fc5-0.
INFO 03-02 00:28:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:13 [async_llm.py:261] Added request cmpl-4219fc2525bd46c487d96cd4b13bfe43-0.
INFO 03-02 00:29:14 [logger.py:42] Received request cmpl-9130e39e991b4afd81f14e4c7df70673-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:14 [async_llm.py:261] Added request cmpl-9130e39e991b4afd81f14e4c7df70673-0.
INFO 03-02 00:29:15 [logger.py:42] Received request cmpl-5b0b3abef59649e2abf4397d75aeb293-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:15 [async_llm.py:261] Added request cmpl-5b0b3abef59649e2abf4397d75aeb293-0.
INFO 03-02 00:29:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:16 [logger.py:42] Received request cmpl-1ec7cf81a4b44ae8bd2053fa14623bcc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:16 [async_llm.py:261] Added request cmpl-1ec7cf81a4b44ae8bd2053fa14623bcc-0.
INFO 03-02 00:29:17 [logger.py:42] Received request cmpl-f857bedd8f284fbfb73d207759cdb31e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:17 [async_llm.py:261] Added request cmpl-f857bedd8f284fbfb73d207759cdb31e-0.
INFO 03-02 00:29:18 [logger.py:42] Received request cmpl-7aa8887774cc414781ecae7700eea0dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:18 [async_llm.py:261] Added request cmpl-7aa8887774cc414781ecae7700eea0dc-0.
INFO 03-02 00:29:20 [logger.py:42] Received request cmpl-a4cc704b1a84473ab445233b766a87f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:20 [async_llm.py:261] Added request cmpl-a4cc704b1a84473ab445233b766a87f1-0.
INFO 03-02 00:29:21 [logger.py:42] Received request cmpl-b2b1e8660d604c65b7d7129e5cef9c98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:21 [async_llm.py:261] Added request cmpl-b2b1e8660d604c65b7d7129e5cef9c98-0.
INFO 03-02 00:29:22 [logger.py:42] Received request cmpl-0f2db8886871443688b39526183586fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:22 [async_llm.py:261] Added request cmpl-0f2db8886871443688b39526183586fd-0.
INFO 03-02 00:29:23 [logger.py:42] Received request cmpl-5aa431852ee84499ab710fd6dfb0145f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:23 [async_llm.py:261] Added request cmpl-5aa431852ee84499ab710fd6dfb0145f-0.
INFO 03-02 00:29:24 [logger.py:42] Received request cmpl-1640e115fdbf428b96e17c8cb3f47c8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:24 [async_llm.py:261] Added request cmpl-1640e115fdbf428b96e17c8cb3f47c8f-0.
INFO 03-02 00:29:25 [logger.py:42] Received request cmpl-048658fbbe464a4ab9e4a120f8e476cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:25 [async_llm.py:261] Added request cmpl-048658fbbe464a4ab9e4a120f8e476cc-0.
INFO 03-02 00:29:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:26 [logger.py:42] Received request cmpl-75b7d5e8fa0142f7b4553797dc7e4b85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:26 [async_llm.py:261] Added request cmpl-75b7d5e8fa0142f7b4553797dc7e4b85-0.
INFO 03-02 00:29:27 [logger.py:42] Received request cmpl-30876c94f7374b4f87c8f6a7b7e14e76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:27 [async_llm.py:261] Added request cmpl-30876c94f7374b4f87c8f6a7b7e14e76-0.
INFO 03-02 00:29:28 [logger.py:42] Received request cmpl-bbd716efe082409291fbe77f9f91be79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:28 [async_llm.py:261] Added request cmpl-bbd716efe082409291fbe77f9f91be79-0.
INFO 03-02 00:29:29 [logger.py:42] Received request cmpl-4cb7f961e075441b94b0c6e61b6d51a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:29 [async_llm.py:261] Added request cmpl-4cb7f961e075441b94b0c6e61b6d51a9-0.
INFO 03-02 00:29:30 [logger.py:42] Received request cmpl-a0907038e6e84824bab5fcd4ebf8ce3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:30 [async_llm.py:261] Added request cmpl-a0907038e6e84824bab5fcd4ebf8ce3d-0.
INFO 03-02 00:29:32 [logger.py:42] Received request cmpl-a909b61846ef4b33a6f7b2b2922f573c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:32 [async_llm.py:261] Added request cmpl-a909b61846ef4b33a6f7b2b2922f573c-0.
INFO 03-02 00:29:33 [logger.py:42] Received request cmpl-11efc3bbeb6e4c1a8a0bf7baaf93bfae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:33 [async_llm.py:261] Added request cmpl-11efc3bbeb6e4c1a8a0bf7baaf93bfae-0.
INFO 03-02 00:29:34 [logger.py:42] Received request cmpl-3d397d6e62f6475689fabeaec263a71e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:34 [async_llm.py:261] Added request cmpl-3d397d6e62f6475689fabeaec263a71e-0.
INFO 03-02 00:29:35 [logger.py:42] Received request cmpl-cb4669832aae4f81aa7970e587df09cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:35 [async_llm.py:261] Added request cmpl-cb4669832aae4f81aa7970e587df09cc-0.
INFO 03-02 00:29:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:36 [logger.py:42] Received request cmpl-73de23591aba451cb99416c1deb6f160-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:36 [async_llm.py:261] Added request cmpl-73de23591aba451cb99416c1deb6f160-0.
INFO 03-02 00:29:37 [logger.py:42] Received request cmpl-e3882f7e41fc40a0b09c5be856e3f62d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:37 [async_llm.py:261] Added request cmpl-e3882f7e41fc40a0b09c5be856e3f62d-0.
INFO 03-02 00:29:38 [logger.py:42] Received request cmpl-f8b8db9c5b8f43d485ef4245adfca240-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:38 [async_llm.py:261] Added request cmpl-f8b8db9c5b8f43d485ef4245adfca240-0.
INFO 03-02 00:29:39 [logger.py:42] Received request cmpl-8eba897e2f6544f7b6f6cd570457a45e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:39 [async_llm.py:261] Added request cmpl-8eba897e2f6544f7b6f6cd570457a45e-0.
INFO 03-02 00:29:40 [logger.py:42] Received request cmpl-b40f6fc9b4a740d79db556961a394625-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:40 [async_llm.py:261] Added request cmpl-b40f6fc9b4a740d79db556961a394625-0.
INFO 03-02 00:29:41 [logger.py:42] Received request cmpl-e76fa1134baa4c38a8c1213bf58ae609-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:41 [async_llm.py:261] Added request cmpl-e76fa1134baa4c38a8c1213bf58ae609-0.
INFO 03-02 00:29:43 [logger.py:42] Received request cmpl-cb1e260466a1444c86dba206d477aba3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:43 [async_llm.py:261] Added request cmpl-cb1e260466a1444c86dba206d477aba3-0.
INFO 03-02 00:29:44 [logger.py:42] Received request cmpl-6a5957e610ce443a9eef5b52dc30749b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:44 [async_llm.py:261] Added request cmpl-6a5957e610ce443a9eef5b52dc30749b-0.
INFO 03-02 00:29:45 [logger.py:42] Received request cmpl-50c344e3d8b141debb540787cbf663b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:45 [async_llm.py:261] Added request cmpl-50c344e3d8b141debb540787cbf663b6-0.
INFO 03-02 00:29:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:46 [logger.py:42] Received request cmpl-b4666e0b77bb4113890d00c6d212289b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:46 [async_llm.py:261] Added request cmpl-b4666e0b77bb4113890d00c6d212289b-0.
INFO 03-02 00:29:47 [logger.py:42] Received request cmpl-c7202bda65cd49908f5e631a5e799d09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:47 [async_llm.py:261] Added request cmpl-c7202bda65cd49908f5e631a5e799d09-0.
INFO 03-02 00:29:48 [logger.py:42] Received request cmpl-0390e40109a24af5b8f36035dcddf9b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:48 [async_llm.py:261] Added request cmpl-0390e40109a24af5b8f36035dcddf9b2-0.
INFO 03-02 00:29:49 [logger.py:42] Received request cmpl-7c0dff556a4b495594643d2950b7c99b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:49 [async_llm.py:261] Added request cmpl-7c0dff556a4b495594643d2950b7c99b-0.
INFO 03-02 00:29:50 [logger.py:42] Received request cmpl-8c764db5eeb94fe59e44f717724a5f17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:50 [async_llm.py:261] Added request cmpl-8c764db5eeb94fe59e44f717724a5f17-0.
INFO 03-02 00:29:51 [logger.py:42] Received request cmpl-0a786f7e0a654565ab34a4bb06de9707-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:51 [async_llm.py:261] Added request cmpl-0a786f7e0a654565ab34a4bb06de9707-0.
INFO 03-02 00:29:52 [logger.py:42] Received request cmpl-70931c82ea0c45eb8ee7b81e4ef90ff1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:52 [async_llm.py:261] Added request cmpl-70931c82ea0c45eb8ee7b81e4ef90ff1-0.
INFO 03-02 00:29:53 [logger.py:42] Received request cmpl-22c2afb6585041349cc5b557cb7e848c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:53 [async_llm.py:261] Added request cmpl-22c2afb6585041349cc5b557cb7e848c-0.
INFO 03-02 00:29:55 [logger.py:42] Received request cmpl-a0933468fbac44a1a9a487faf1982f85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:55 [async_llm.py:261] Added request cmpl-a0933468fbac44a1a9a487faf1982f85-0.
INFO 03-02 00:29:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:29:56 [logger.py:42] Received request cmpl-aa4533174b89443c9a2cf53d1ed952bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:56 [async_llm.py:261] Added request cmpl-aa4533174b89443c9a2cf53d1ed952bd-0.
INFO 03-02 00:29:57 [logger.py:42] Received request cmpl-e700db4c097e4f2c8d26ed173675f328-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:57 [async_llm.py:261] Added request cmpl-e700db4c097e4f2c8d26ed173675f328-0.
INFO 03-02 00:29:58 [logger.py:42] Received request cmpl-082bd66ff9a8443abbc75eabb174bca3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:58 [async_llm.py:261] Added request cmpl-082bd66ff9a8443abbc75eabb174bca3-0.
INFO 03-02 00:29:59 [logger.py:42] Received request cmpl-769a558cf8874fd5ac12c8b18f6d8b9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:29:59 [async_llm.py:261] Added request cmpl-769a558cf8874fd5ac12c8b18f6d8b9e-0.
INFO 03-02 00:30:00 [logger.py:42] Received request cmpl-bc1298d2e8154804a92d4742da3b9c49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:00 [async_llm.py:261] Added request cmpl-bc1298d2e8154804a92d4742da3b9c49-0.
INFO 03-02 00:30:01 [logger.py:42] Received request cmpl-602a87c3535d4f909d89184212af761c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:01 [async_llm.py:261] Added request cmpl-602a87c3535d4f909d89184212af761c-0.
INFO 03-02 00:30:02 [logger.py:42] Received request cmpl-706a0484d5a648828ea96b6d0e08f1ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:02 [async_llm.py:261] Added request cmpl-706a0484d5a648828ea96b6d0e08f1ee-0.
INFO 03-02 00:30:03 [logger.py:42] Received request cmpl-04b6b50e60934301baf8c57efcdaf8ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:03 [async_llm.py:261] Added request cmpl-04b6b50e60934301baf8c57efcdaf8ac-0.
INFO 03-02 00:30:04 [logger.py:42] Received request cmpl-000ce527f16140628ad0835587183a27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:04 [async_llm.py:261] Added request cmpl-000ce527f16140628ad0835587183a27-0.
INFO 03-02 00:30:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:30:06 [logger.py:42] Received request cmpl-dda37345b953462a80f2a484c0391a12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:06 [async_llm.py:261] Added request cmpl-dda37345b953462a80f2a484c0391a12-0.
INFO 03-02 00:30:07 [logger.py:42] Received request cmpl-efef479f83db4a899258a1b0a46f6ff8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:07 [async_llm.py:261] Added request cmpl-efef479f83db4a899258a1b0a46f6ff8-0.
INFO 03-02 00:30:08 [logger.py:42] Received request cmpl-494c306636854f19b69956217352de07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:08 [async_llm.py:261] Added request cmpl-494c306636854f19b69956217352de07-0.
INFO 03-02 00:30:09 [logger.py:42] Received request cmpl-3663c501580449e2b8b3fceb9b4325ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:09 [async_llm.py:261] Added request cmpl-3663c501580449e2b8b3fceb9b4325ef-0.
INFO 03-02 00:30:10 [logger.py:42] Received request cmpl-e576cea1a0eb430f999a44406de4f790-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:10 [async_llm.py:261] Added request cmpl-e576cea1a0eb430f999a44406de4f790-0.
INFO 03-02 00:30:11 [logger.py:42] Received request cmpl-f2e9a880f11947c58c355413140571e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:11 [async_llm.py:261] Added request cmpl-f2e9a880f11947c58c355413140571e2-0.
INFO 03-02 00:30:12 [logger.py:42] Received request cmpl-eae0e2c681414650b006210c8c928458-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:12 [async_llm.py:261] Added request cmpl-eae0e2c681414650b006210c8c928458-0.
INFO 03-02 00:30:13 [logger.py:42] Received request cmpl-832120823a6a44f5a841fc3680a8614d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:13 [async_llm.py:261] Added request cmpl-832120823a6a44f5a841fc3680a8614d-0.
INFO 03-02 00:30:14 [logger.py:42] Received request cmpl-7eabc69915564edda6cd69f6c9f6e2b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:14 [async_llm.py:261] Added request cmpl-7eabc69915564edda6cd69f6c9f6e2b6-0.
INFO 03-02 00:30:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:30:15 [logger.py:42] Received request cmpl-7c6bcf072f454dd89d52533c8520e3f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:15 [async_llm.py:261] Added request cmpl-7c6bcf072f454dd89d52533c8520e3f8-0.
INFO 03-02 00:30:17 [logger.py:42] Received request cmpl-b7368c14003f497fb9a8d39ed2ed5a47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:17 [async_llm.py:261] Added request cmpl-b7368c14003f497fb9a8d39ed2ed5a47-0.
INFO 03-02 00:30:18 [logger.py:42] Received request cmpl-f836b828fa8f4cb9994c06a471e14e69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:18 [async_llm.py:261] Added request cmpl-f836b828fa8f4cb9994c06a471e14e69-0.
INFO 03-02 00:30:19 [logger.py:42] Received request cmpl-66bea99ec2ba48c8b8e16102143905c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:19 [async_llm.py:261] Added request cmpl-66bea99ec2ba48c8b8e16102143905c1-0.
INFO 03-02 00:30:20 [logger.py:42] Received request cmpl-f7bd6811403f489f9f06b3f090caa283-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:20 [async_llm.py:261] Added request cmpl-f7bd6811403f489f9f06b3f090caa283-0.
INFO 03-02 00:30:21 [logger.py:42] Received request cmpl-ebb5127a6cdd440abd3b98c7248e8e0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:21 [async_llm.py:261] Added request cmpl-ebb5127a6cdd440abd3b98c7248e8e0f-0.
INFO 03-02 00:30:22 [logger.py:42] Received request cmpl-6ade93efcbbb48238907a44fd3cf207a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:22 [async_llm.py:261] Added request cmpl-6ade93efcbbb48238907a44fd3cf207a-0.
INFO 03-02 00:30:23 [logger.py:42] Received request cmpl-598346998dc74613888521d3548d2b44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:23 [async_llm.py:261] Added request cmpl-598346998dc74613888521d3548d2b44-0.
INFO 03-02 00:30:24 [logger.py:42] Received request cmpl-7683647bad41456281da8587639ab71c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:24 [async_llm.py:261] Added request cmpl-7683647bad41456281da8587639ab71c-0.
INFO 03-02 00:30:25 [logger.py:42] Received request cmpl-e1feab6e64e242818665c5afb2ac2f4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:25 [async_llm.py:261] Added request cmpl-e1feab6e64e242818665c5afb2ac2f4d-0.
INFO 03-02 00:30:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:30:26 [logger.py:42] Received request cmpl-799f59468c65474d97f231e1d5f852cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:30:26 [async_llm.py:261] Added request cmpl-799f59468c65474d97f231e1d5f852cb-0.
[... 8 similar entries (00:30:27 to 00:30:35) elided: same prompt 'write a quick sort algorithm.' and identical SamplingParams (max_tokens=5), roughly one request per second, each "POST /v1/completions" answered 200 OK and added to the engine ...]
INFO 03-02 00:30:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar entries (00:30:36 to 00:30:45) elided: same prompt and params, each "POST /v1/completions" answered 200 OK and added to the engine ...]
INFO 03-02 00:30:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar entries (00:30:46 to 00:30:55) elided: same prompt and params, each "POST /v1/completions" answered 200 OK and added to the engine ...]
INFO 03-02 00:30:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 similar entries (00:30:56 to 00:31:05) elided: same prompt and params, each "POST /v1/completions" answered 200 OK and added to the engine ...]
INFO 03-02 00:31:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 5 similar entries (00:31:06 to 00:31:10) elided: same prompt and params, each "POST /v1/completions" answered 200 OK and added to the engine ...]
INFO 03-02 00:31:11 [logger.py:42] Received request cmpl-4997fb82fa4346f092155b8459e6a062-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:11 [async_llm.py:261] Added request cmpl-4997fb82fa4346f092155b8459e6a062-0.
INFO 03-02 00:31:12 [logger.py:42] Received request cmpl-7aaa468fc2164461b1f20933921b7b3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:12 [async_llm.py:261] Added request cmpl-7aaa468fc2164461b1f20933921b7b3b-0.
INFO 03-02 00:31:14 [logger.py:42] Received request cmpl-646db69b3ede4460b0444dfd03e6c5e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:14 [async_llm.py:261] Added request cmpl-646db69b3ede4460b0444dfd03e6c5e5-0.
INFO 03-02 00:31:15 [logger.py:42] Received request cmpl-84f5a5d02591432fa956bb85be6cab8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:15 [async_llm.py:261] Added request cmpl-84f5a5d02591432fa956bb85be6cab8a-0.
INFO 03-02 00:31:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:51 [async_llm.py:261] Added request cmpl-90a8ba4159924bb1a8faa0afd4a2fed5-0.
INFO 03-02 00:31:52 [logger.py:42] Received request cmpl-93f22a350b3644638cc9cfa26163e893-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:52 [async_llm.py:261] Added request cmpl-93f22a350b3644638cc9cfa26163e893-0.
INFO 03-02 00:31:53 [logger.py:42] Received request cmpl-d5a52f36074a4117b2bb5e177522473f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:53 [async_llm.py:261] Added request cmpl-d5a52f36074a4117b2bb5e177522473f-0.
INFO 03-02 00:31:54 [logger.py:42] Received request cmpl-83a7aeb651d04021abbcea7d19cd447c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:54 [async_llm.py:261] Added request cmpl-83a7aeb651d04021abbcea7d19cd447c-0.
INFO 03-02 00:31:55 [logger.py:42] Received request cmpl-b05caae1aa4245d58cbcec4149936a68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:55 [async_llm.py:261] Added request cmpl-b05caae1aa4245d58cbcec4149936a68-0.
INFO 03-02 00:31:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:31:56 [logger.py:42] Received request cmpl-ff5214a32f0d4c8eb44e9261e6bf989a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:56 [async_llm.py:261] Added request cmpl-ff5214a32f0d4c8eb44e9261e6bf989a-0.
INFO 03-02 00:31:57 [logger.py:42] Received request cmpl-25021c35c9f64892af841b21f5d6709a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:57 [async_llm.py:261] Added request cmpl-25021c35c9f64892af841b21f5d6709a-0.
INFO 03-02 00:31:58 [logger.py:42] Received request cmpl-b16cce2280bc48d9a09d7e73ae83b87f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:31:58 [async_llm.py:261] Added request cmpl-b16cce2280bc48d9a09d7e73ae83b87f-0.
INFO 03-02 00:32:00 [logger.py:42] Received request cmpl-89cab9fa8ae74450b45cb561b820e3e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:00 [async_llm.py:261] Added request cmpl-89cab9fa8ae74450b45cb561b820e3e6-0.
INFO 03-02 00:32:01 [logger.py:42] Received request cmpl-5d77539790df434293fc1a00dd427bda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:01 [async_llm.py:261] Added request cmpl-5d77539790df434293fc1a00dd427bda-0.
INFO 03-02 00:32:02 [logger.py:42] Received request cmpl-50aba3176eba4e26a5118630f9802213-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:02 [async_llm.py:261] Added request cmpl-50aba3176eba4e26a5118630f9802213-0.
INFO 03-02 00:32:03 [logger.py:42] Received request cmpl-60b1859c0b4c4146ac95c7f9c0f360d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:03 [async_llm.py:261] Added request cmpl-60b1859c0b4c4146ac95c7f9c0f360d2-0.
INFO 03-02 00:32:04 [logger.py:42] Received request cmpl-655a97b55dd24d0a9834cfcc2009940a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:04 [async_llm.py:261] Added request cmpl-655a97b55dd24d0a9834cfcc2009940a-0.
INFO 03-02 00:32:05 [logger.py:42] Received request cmpl-f1093f1e7fe64d45aaaec0d5973fd53e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:05 [async_llm.py:261] Added request cmpl-f1093f1e7fe64d45aaaec0d5973fd53e-0.
INFO 03-02 00:32:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:06 [logger.py:42] Received request cmpl-eb2ed6f8af84498482377ae09132b044-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:06 [async_llm.py:261] Added request cmpl-eb2ed6f8af84498482377ae09132b044-0.
INFO 03-02 00:32:07 [logger.py:42] Received request cmpl-ef9acfa6244540a39c1e340678a52702-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:07 [async_llm.py:261] Added request cmpl-ef9acfa6244540a39c1e340678a52702-0.
INFO 03-02 00:32:08 [logger.py:42] Received request cmpl-855e57e4590647b1ba4443b2b66d1245-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:08 [async_llm.py:261] Added request cmpl-855e57e4590647b1ba4443b2b66d1245-0.
INFO 03-02 00:32:09 [logger.py:42] Received request cmpl-362dd3f9c54c4ac4ae524aa6e2385806-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:09 [async_llm.py:261] Added request cmpl-362dd3f9c54c4ac4ae524aa6e2385806-0.
INFO 03-02 00:32:10 [logger.py:42] Received request cmpl-45685b7d663e48d6a91929e76fbd905d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:10 [async_llm.py:261] Added request cmpl-45685b7d663e48d6a91929e76fbd905d-0.
INFO 03-02 00:32:12 [logger.py:42] Received request cmpl-2b0533848bff4c0897f0dd845565dd5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:12 [async_llm.py:261] Added request cmpl-2b0533848bff4c0897f0dd845565dd5f-0.
INFO 03-02 00:32:13 [logger.py:42] Received request cmpl-4c8a8cba75f3440298418398ae452ac0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:13 [async_llm.py:261] Added request cmpl-4c8a8cba75f3440298418398ae452ac0-0.
INFO 03-02 00:32:14 [logger.py:42] Received request cmpl-a7722467aab7445789df88819c81d29b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:14 [async_llm.py:261] Added request cmpl-a7722467aab7445789df88819c81d29b-0.
INFO 03-02 00:32:15 [logger.py:42] Received request cmpl-002a45930dab4427afaba6338575d4ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:15 [async_llm.py:261] Added request cmpl-002a45930dab4427afaba6338575d4ef-0.
INFO 03-02 00:32:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:16 [logger.py:42] Received request cmpl-8d3248c82cb4451599aa7e2affacb7b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:16 [async_llm.py:261] Added request cmpl-8d3248c82cb4451599aa7e2affacb7b4-0.
INFO 03-02 00:32:17 [logger.py:42] Received request cmpl-342139420eab43fcad9175838a4e5907-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:17 [async_llm.py:261] Added request cmpl-342139420eab43fcad9175838a4e5907-0.
INFO 03-02 00:32:18 [logger.py:42] Received request cmpl-b72afc49d2864eab9c9e772c2c605f31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:18 [async_llm.py:261] Added request cmpl-b72afc49d2864eab9c9e772c2c605f31-0.
INFO 03-02 00:32:19 [logger.py:42] Received request cmpl-b77257419d6a44fbb738072624ec882c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:19 [async_llm.py:261] Added request cmpl-b77257419d6a44fbb738072624ec882c-0.
INFO 03-02 00:32:20 [logger.py:42] Received request cmpl-d1b0750a74204767bb64f17263b974ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:20 [async_llm.py:261] Added request cmpl-d1b0750a74204767bb64f17263b974ad-0.
INFO 03-02 00:32:21 [logger.py:42] Received request cmpl-ae5ed8e4f1c24fb6b4ad68aa62071cf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:21 [async_llm.py:261] Added request cmpl-ae5ed8e4f1c24fb6b4ad68aa62071cf4-0.
INFO 03-02 00:32:23 [logger.py:42] Received request cmpl-e6e5427e2ef54be1bd688ed018e5655b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:23 [async_llm.py:261] Added request cmpl-e6e5427e2ef54be1bd688ed018e5655b-0.
INFO 03-02 00:32:24 [logger.py:42] Received request cmpl-d0bca9f187424d6eb0e8210bd560cf24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:24 [async_llm.py:261] Added request cmpl-d0bca9f187424d6eb0e8210bd560cf24-0.
INFO 03-02 00:32:25 [logger.py:42] Received request cmpl-9d4385380fdf4c509f6cc4b81811abb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:25 [async_llm.py:261] Added request cmpl-9d4385380fdf4c509f6cc4b81811abb5-0.
INFO 03-02 00:32:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:26 [logger.py:42] Received request cmpl-3f6881f46d9d4e8097f55639ca18d0e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:26 [async_llm.py:261] Added request cmpl-3f6881f46d9d4e8097f55639ca18d0e8-0.
INFO 03-02 00:32:27 [logger.py:42] Received request cmpl-d434e2e322a741b5acf8ced921c2195b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:27 [async_llm.py:261] Added request cmpl-d434e2e322a741b5acf8ced921c2195b-0.
INFO 03-02 00:32:28 [logger.py:42] Received request cmpl-6f1b04437edc4ae1a5cd61908e1a854b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:28 [async_llm.py:261] Added request cmpl-6f1b04437edc4ae1a5cd61908e1a854b-0.
INFO 03-02 00:32:29 [logger.py:42] Received request cmpl-5ff32aab4b084950bb9060eadc269ee9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:29 [async_llm.py:261] Added request cmpl-5ff32aab4b084950bb9060eadc269ee9-0.
INFO 03-02 00:32:30 [logger.py:42] Received request cmpl-4a31d65b8e6f4d378cb6f23f4d48f7d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:30 [async_llm.py:261] Added request cmpl-4a31d65b8e6f4d378cb6f23f4d48f7d5-0.
INFO 03-02 00:32:31 [logger.py:42] Received request cmpl-0531df17765d4989a0fbedad53ef2f0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:31 [async_llm.py:261] Added request cmpl-0531df17765d4989a0fbedad53ef2f0c-0.
INFO 03-02 00:32:32 [logger.py:42] Received request cmpl-920ad93f147749d08f0c5f4d02a3d7d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:32 [async_llm.py:261] Added request cmpl-920ad93f147749d08f0c5f4d02a3d7d0-0.
INFO 03-02 00:32:33 [logger.py:42] Received request cmpl-f8f834e9bfac46f2b816093c73a9ed54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:33 [async_llm.py:261] Added request cmpl-f8f834e9bfac46f2b816093c73a9ed54-0.
INFO 03-02 00:32:35 [logger.py:42] Received request cmpl-f5de222f1d1041b294b6c3d22494231f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:35 [async_llm.py:261] Added request cmpl-f5de222f1d1041b294b6c3d22494231f-0.
INFO 03-02 00:32:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:36 [logger.py:42] Received request cmpl-0f5912efe4f84a2695daea0481a44af7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:36 [async_llm.py:261] Added request cmpl-0f5912efe4f84a2695daea0481a44af7-0.
INFO 03-02 00:32:37 [logger.py:42] Received request cmpl-83ee84f2c7f6456b978f04cfa38b554c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:37 [async_llm.py:261] Added request cmpl-83ee84f2c7f6456b978f04cfa38b554c-0.
INFO 03-02 00:32:38 [logger.py:42] Received request cmpl-fb959d0e7e8440109a4101dda90f8f0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:38 [async_llm.py:261] Added request cmpl-fb959d0e7e8440109a4101dda90f8f0f-0.
INFO 03-02 00:32:39 [logger.py:42] Received request cmpl-2e995436305f41579634189a61344876-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:39 [async_llm.py:261] Added request cmpl-2e995436305f41579634189a61344876-0.
INFO 03-02 00:32:40 [logger.py:42] Received request cmpl-a8ffabfd483240e1b45a820ddc12581b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:40 [async_llm.py:261] Added request cmpl-a8ffabfd483240e1b45a820ddc12581b-0.
INFO 03-02 00:32:41 [logger.py:42] Received request cmpl-efdca355ab1d47fa845b45b784a4cf1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:41 [async_llm.py:261] Added request cmpl-efdca355ab1d47fa845b45b784a4cf1b-0.
INFO 03-02 00:32:42 [logger.py:42] Received request cmpl-065f360939b7483b8997c5b96196705b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:42 [async_llm.py:261] Added request cmpl-065f360939b7483b8997c5b96196705b-0.
INFO 03-02 00:32:43 [logger.py:42] Received request cmpl-86a6ad7e79904e87acdefe2e81a3421f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:43 [async_llm.py:261] Added request cmpl-86a6ad7e79904e87acdefe2e81a3421f-0.
INFO 03-02 00:32:44 [logger.py:42] Received request cmpl-a1e8a0a09ed34de1b92cf62e96cc62f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:44 [async_llm.py:261] Added request cmpl-a1e8a0a09ed34de1b92cf62e96cc62f2-0.
INFO 03-02 00:32:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:46 [logger.py:42] Received request cmpl-18a0bd1537524f958cab65ce47cce170-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:46 [async_llm.py:261] Added request cmpl-18a0bd1537524f958cab65ce47cce170-0.
INFO 03-02 00:32:47 [logger.py:42] Received request cmpl-9c32f61d0a214fafbaf392ddca223549-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:47 [async_llm.py:261] Added request cmpl-9c32f61d0a214fafbaf392ddca223549-0.
INFO 03-02 00:32:48 [logger.py:42] Received request cmpl-dfc0397e738e4921af23c8878206784a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:48 [async_llm.py:261] Added request cmpl-dfc0397e738e4921af23c8878206784a-0.
INFO 03-02 00:32:49 [logger.py:42] Received request cmpl-7e2b5211065b4f8485b08ef0efd0df0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:49 [async_llm.py:261] Added request cmpl-7e2b5211065b4f8485b08ef0efd0df0e-0.
INFO 03-02 00:32:50 [logger.py:42] Received request cmpl-6bb963a5f2854fc99a860ff924809dba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:50 [async_llm.py:261] Added request cmpl-6bb963a5f2854fc99a860ff924809dba-0.
INFO 03-02 00:32:51 [logger.py:42] Received request cmpl-54d429676e5b4b089462109e889d54b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:51 [async_llm.py:261] Added request cmpl-54d429676e5b4b089462109e889d54b8-0.
INFO 03-02 00:32:52 [logger.py:42] Received request cmpl-2cf4980e79cc4f468e06660752fcfd92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:52 [async_llm.py:261] Added request cmpl-2cf4980e79cc4f468e06660752fcfd92-0.
INFO 03-02 00:32:53 [logger.py:42] Received request cmpl-5ee209408e334425912d871bcc3d41fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:53 [async_llm.py:261] Added request cmpl-5ee209408e334425912d871bcc3d41fe-0.
INFO 03-02 00:32:54 [logger.py:42] Received request cmpl-789f090d9dd94cd2a2bea9e027da21bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:54 [async_llm.py:261] Added request cmpl-789f090d9dd94cd2a2bea9e027da21bc-0.
INFO 03-02 00:32:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:32:55 [logger.py:42] Received request cmpl-30c26b0168b54f9f8344156c53ac52b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:55 [async_llm.py:261] Added request cmpl-30c26b0168b54f9f8344156c53ac52b0-0.
INFO 03-02 00:32:56 [logger.py:42] Received request cmpl-ba03e5d807684886b53238b71ecaeb59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:57 [async_llm.py:261] Added request cmpl-ba03e5d807684886b53238b71ecaeb59-0.
INFO 03-02 00:32:58 [logger.py:42] Received request cmpl-3adb0f3447604a178ca15d6d6be13426-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:58 [async_llm.py:261] Added request cmpl-3adb0f3447604a178ca15d6d6be13426-0.
INFO 03-02 00:32:59 [logger.py:42] Received request cmpl-56a671c2745d4b089baf0a8aecbdd1ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:32:59 [async_llm.py:261] Added request cmpl-56a671c2745d4b089baf0a8aecbdd1ce-0.
INFO 03-02 00:33:00 [logger.py:42] Received request cmpl-5ce4d624258f4d23b5d6f92b53ff62f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:00 [async_llm.py:261] Added request cmpl-5ce4d624258f4d23b5d6f92b53ff62f3-0.
INFO 03-02 00:33:01 [logger.py:42] Received request cmpl-5c49e8db6e21429d81f36df024e159e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:01 [async_llm.py:261] Added request cmpl-5c49e8db6e21429d81f36df024e159e8-0.
INFO 03-02 00:33:02 [logger.py:42] Received request cmpl-a6fc1b5ab21842cebc5d9d2ab5217e6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:02 [async_llm.py:261] Added request cmpl-a6fc1b5ab21842cebc5d9d2ab5217e6a-0.
INFO 03-02 00:33:03 [logger.py:42] Received request cmpl-eeddb0d9f72e418db7d9cce6f586eefd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:03 [async_llm.py:261] Added request cmpl-eeddb0d9f72e418db7d9cce6f586eefd-0.
INFO 03-02 00:33:04 [logger.py:42] Received request cmpl-54457024b37b474bab6aef442808c03a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:04 [async_llm.py:261] Added request cmpl-54457024b37b474bab6aef442808c03a-0.
INFO 03-02 00:33:05 [logger.py:42] Received request cmpl-f87dc60260a246c89e164ae317a7286d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:05 [async_llm.py:261] Added request cmpl-f87dc60260a246c89e164ae317a7286d-0.
INFO 03-02 00:33:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:06 [logger.py:42] Received request cmpl-e8f6d1b758bd419583b169785111f8fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:06 [async_llm.py:261] Added request cmpl-e8f6d1b758bd419583b169785111f8fc-0.
INFO 03-02 00:33:07 [logger.py:42] Received request cmpl-0245840e2fa14724a7d3ea055909667e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:07 [async_llm.py:261] Added request cmpl-0245840e2fa14724a7d3ea055909667e-0.
INFO 03-02 00:33:09 [logger.py:42] Received request cmpl-ab71021b3eae4ae5a4bcb85746683097-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:09 [async_llm.py:261] Added request cmpl-ab71021b3eae4ae5a4bcb85746683097-0.
INFO 03-02 00:33:10 [logger.py:42] Received request cmpl-8d584401923f4289b92d6cb52d9091c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:10 [async_llm.py:261] Added request cmpl-8d584401923f4289b92d6cb52d9091c8-0.
INFO 03-02 00:33:11 [logger.py:42] Received request cmpl-c3df1bf79e4d445ba2cd5364b66e0110-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:11 [async_llm.py:261] Added request cmpl-c3df1bf79e4d445ba2cd5364b66e0110-0.
INFO 03-02 00:33:12 [logger.py:42] Received request cmpl-63e70d25abe74af9bd52d78fc809d131-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:12 [async_llm.py:261] Added request cmpl-63e70d25abe74af9bd52d78fc809d131-0.
INFO 03-02 00:33:13 [logger.py:42] Received request cmpl-0de098eea55f4d55a9a68d4308ac3c06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:13 [async_llm.py:261] Added request cmpl-0de098eea55f4d55a9a68d4308ac3c06-0.
INFO 03-02 00:33:14 [logger.py:42] Received request cmpl-e095b8f75a8f4e7bb6d6a54ff0e74387-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:14 [async_llm.py:261] Added request cmpl-e095b8f75a8f4e7bb6d6a54ff0e74387-0.
INFO 03-02 00:33:15 [logger.py:42] Received request cmpl-b52742ad1fe64ffcb39ccbfc5a23245c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:15 [async_llm.py:261] Added request cmpl-b52742ad1fe64ffcb39ccbfc5a23245c-0.
INFO 03-02 00:33:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:16 [logger.py:42] Received request cmpl-4011d5f7f81e4736b85615803eb88a68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:16 [async_llm.py:261] Added request cmpl-4011d5f7f81e4736b85615803eb88a68-0.
INFO 03-02 00:33:17 [logger.py:42] Received request cmpl-e815cffc4d7545ff9df2e1a229dde035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:17 [async_llm.py:261] Added request cmpl-e815cffc4d7545ff9df2e1a229dde035-0.
INFO 03-02 00:33:18 [logger.py:42] Received request cmpl-cbf4129dd62042c6a6fbca7cb30a7049-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:18 [async_llm.py:261] Added request cmpl-cbf4129dd62042c6a6fbca7cb30a7049-0.
INFO 03-02 00:33:19 [logger.py:42] Received request cmpl-c3a78e9b8bc549a5b0a3dbd85841af91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:19 [async_llm.py:261] Added request cmpl-c3a78e9b8bc549a5b0a3dbd85841af91-0.
INFO 03-02 00:33:21 [logger.py:42] Received request cmpl-f689a3231fec43e0a358051dd5202c51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:21 [async_llm.py:261] Added request cmpl-f689a3231fec43e0a358051dd5202c51-0.
INFO 03-02 00:33:22 [logger.py:42] Received request cmpl-c0cf03da33aa442899db48dadb7dc402-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:22 [async_llm.py:261] Added request cmpl-c0cf03da33aa442899db48dadb7dc402-0.
INFO 03-02 00:33:23 [logger.py:42] Received request cmpl-73d2f451aab544faa3510d15366965ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:23 [async_llm.py:261] Added request cmpl-73d2f451aab544faa3510d15366965ae-0.
INFO 03-02 00:33:24 [logger.py:42] Received request cmpl-3d60c0ce047f4cd4b4612946990a62c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:24 [async_llm.py:261] Added request cmpl-3d60c0ce047f4cd4b4612946990a62c2-0.
INFO 03-02 00:33:25 [logger.py:42] Received request cmpl-124338a327664fc3af9406b3054e0c95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:25 [async_llm.py:261] Added request cmpl-124338a327664fc3af9406b3054e0c95-0.
INFO 03-02 00:33:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:26 [logger.py:42] Received request cmpl-c591113aecba4d7aa068dfd97ce97c3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:26 [async_llm.py:261] Added request cmpl-c591113aecba4d7aa068dfd97ce97c3c-0.
INFO 03-02 00:33:27 [logger.py:42] Received request cmpl-591b6a6b76494ec1989c28d00241548d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:27 [async_llm.py:261] Added request cmpl-591b6a6b76494ec1989c28d00241548d-0.
INFO 03-02 00:33:28 [logger.py:42] Received request cmpl-f10030e37c7b4e57a6e6fd88a9889c0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:28 [async_llm.py:261] Added request cmpl-f10030e37c7b4e57a6e6fd88a9889c0a-0.
INFO 03-02 00:33:29 [logger.py:42] Received request cmpl-931d56da521b4d5c81db2da997278380-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:29 [async_llm.py:261] Added request cmpl-931d56da521b4d5c81db2da997278380-0.
INFO 03-02 00:33:30 [logger.py:42] Received request cmpl-dc45f70c54ee4233ad99fe7143eee5d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:30 [async_llm.py:261] Added request cmpl-dc45f70c54ee4233ad99fe7143eee5d0-0.
INFO 03-02 00:33:32 [logger.py:42] Received request cmpl-5b6f1558eaa6492fa2a5d575a412e40e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:32 [async_llm.py:261] Added request cmpl-5b6f1558eaa6492fa2a5d575a412e40e-0.
INFO 03-02 00:33:33 [logger.py:42] Received request cmpl-424ea4b48a234c199d61ed8de9f0aba8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:33 [async_llm.py:261] Added request cmpl-424ea4b48a234c199d61ed8de9f0aba8-0.
INFO 03-02 00:33:34 [logger.py:42] Received request cmpl-1daa91217bc64daa8286432ab1efea2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:34 [async_llm.py:261] Added request cmpl-1daa91217bc64daa8286432ab1efea2e-0.
INFO 03-02 00:33:35 [logger.py:42] Received request cmpl-85d97aab892c4e22ad4df98d7465d850-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:35 [async_llm.py:261] Added request cmpl-85d97aab892c4e22ad4df98d7465d850-0.
INFO 03-02 00:33:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:33:36 [logger.py:42] Received request cmpl-0114ad26717744a6b87e077369ecde71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:36 [async_llm.py:261] Added request cmpl-0114ad26717744a6b87e077369ecde71-0.
INFO 03-02 00:33:37 [logger.py:42] Received request cmpl-0f6848e1047943be9105b3905405f292-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:37 [async_llm.py:261] Added request cmpl-0f6848e1047943be9105b3905405f292-0.
INFO 03-02 00:33:38 [logger.py:42] Received request cmpl-ee15ae7246e84f33b88bc3202bf27646-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:38 [async_llm.py:261] Added request cmpl-ee15ae7246e84f33b88bc3202bf27646-0.
INFO 03-02 00:33:39 [logger.py:42] Received request cmpl-f0d9b7446077490bb5cab8d4431ab380-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:39 [async_llm.py:261] Added request cmpl-f0d9b7446077490bb5cab8d4431ab380-0.
INFO 03-02 00:33:40 [logger.py:42] Received request cmpl-25b85e6468d449e29ee83b774be5a344-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:40 [async_llm.py:261] Added request cmpl-25b85e6468d449e29ee83b774be5a344-0.
INFO 03-02 00:33:41 [logger.py:42] Received request cmpl-91751437d6754e8ea1e37ebcaba705b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:41 [async_llm.py:261] Added request cmpl-91751437d6754e8ea1e37ebcaba705b5-0.
INFO 03-02 00:33:42 [logger.py:42] Received request cmpl-491c604c476e4a109872be13cfb4ce62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:42 [async_llm.py:261] Added request cmpl-491c604c476e4a109872be13cfb4ce62-0.
INFO 03-02 00:33:44 [logger.py:42] Received request cmpl-7a310f91cfa9433f818b42b6fde2067a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:44 [async_llm.py:261] Added request cmpl-7a310f91cfa9433f818b42b6fde2067a-0.
INFO 03-02 00:33:45 [logger.py:42] Received request cmpl-1821cc56629644639d8217033c808659-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:33:45 [async_llm.py:261] Added request cmpl-1821cc56629644639d8217033c808659-0.
INFO 03-02 00:33:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:26 [logger.py:42] Received request cmpl-393234afde6749e78e742df993d9dcb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:26 [async_llm.py:261] Added request cmpl-393234afde6749e78e742df993d9dcb1-0.
INFO 03-02 00:34:27 [logger.py:42] Received request cmpl-21c37f3b88244a348e7cd07c33f3c7c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:27 [async_llm.py:261] Added request cmpl-21c37f3b88244a348e7cd07c33f3c7c7-0.
INFO 03-02 00:34:29 [logger.py:42] Received request cmpl-d15fa24211a446eabaa6c121c757a7ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:29 [async_llm.py:261] Added request cmpl-d15fa24211a446eabaa6c121c757a7ca-0.
INFO 03-02 00:34:30 [logger.py:42] Received request cmpl-a485f998ac494ab38b873f418471896c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:30 [async_llm.py:261] Added request cmpl-a485f998ac494ab38b873f418471896c-0.
INFO 03-02 00:34:31 [logger.py:42] Received request cmpl-4acd2003ce2b4a128dcb5272cc4e13a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:31 [async_llm.py:261] Added request cmpl-4acd2003ce2b4a128dcb5272cc4e13a5-0.
INFO 03-02 00:34:32 [logger.py:42] Received request cmpl-279e54f801bd44d58666eec6cafd87c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:32 [async_llm.py:261] Added request cmpl-279e54f801bd44d58666eec6cafd87c2-0.
INFO 03-02 00:34:33 [logger.py:42] Received request cmpl-ff733ed31374402e86c5bfc121b950c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:33 [async_llm.py:261] Added request cmpl-ff733ed31374402e86c5bfc121b950c8-0.
INFO 03-02 00:34:34 [logger.py:42] Received request cmpl-7efa28cde34f422ba470b100a590a8cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:34 [async_llm.py:261] Added request cmpl-7efa28cde34f422ba470b100a590a8cb-0.
INFO 03-02 00:34:35 [logger.py:42] Received request cmpl-48896099591f48b9804c30a432bc2c75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:35 [async_llm.py:261] Added request cmpl-48896099591f48b9804c30a432bc2c75-0.
INFO 03-02 00:34:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:36 [logger.py:42] Received request cmpl-0eebd752ad2b4ac9be5cffebbc69c8f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:36 [async_llm.py:261] Added request cmpl-0eebd752ad2b4ac9be5cffebbc69c8f5-0.
INFO 03-02 00:34:37 [logger.py:42] Received request cmpl-2955399679514fe8866b3ff0ec0d1cad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:37 [async_llm.py:261] Added request cmpl-2955399679514fe8866b3ff0ec0d1cad-0.
INFO 03-02 00:34:38 [logger.py:42] Received request cmpl-629e7b9917b845f19fbe5f7788336ff9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:38 [async_llm.py:261] Added request cmpl-629e7b9917b845f19fbe5f7788336ff9-0.
INFO 03-02 00:34:39 [logger.py:42] Received request cmpl-09791268a6684a77ad9039f1c3a25d30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:39 [async_llm.py:261] Added request cmpl-09791268a6684a77ad9039f1c3a25d30-0.
INFO 03-02 00:34:41 [logger.py:42] Received request cmpl-9b0739c2f4c343b1b5e51cf39a3f6e4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:41 [async_llm.py:261] Added request cmpl-9b0739c2f4c343b1b5e51cf39a3f6e4e-0.
INFO 03-02 00:34:42 [logger.py:42] Received request cmpl-4ab823e725284157b523dff76d9e5d87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:42 [async_llm.py:261] Added request cmpl-4ab823e725284157b523dff76d9e5d87-0.
INFO 03-02 00:34:43 [logger.py:42] Received request cmpl-227c2ab81d6e42ce999f7b1885b408fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:43 [async_llm.py:261] Added request cmpl-227c2ab81d6e42ce999f7b1885b408fc-0.
INFO 03-02 00:34:44 [logger.py:42] Received request cmpl-4c8e26f99bc8478ab37d06d835950ebf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:44 [async_llm.py:261] Added request cmpl-4c8e26f99bc8478ab37d06d835950ebf-0.
INFO 03-02 00:34:45 [logger.py:42] Received request cmpl-319f5080733742fdab48813f37837354-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:45 [async_llm.py:261] Added request cmpl-319f5080733742fdab48813f37837354-0.
INFO 03-02 00:34:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:46 [logger.py:42] Received request cmpl-814e17ea852b4e7e9642e0a919053309-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:46 [async_llm.py:261] Added request cmpl-814e17ea852b4e7e9642e0a919053309-0.
INFO 03-02 00:34:47 [logger.py:42] Received request cmpl-17c548f3d0cb440f9c0594b52788eb6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:47 [async_llm.py:261] Added request cmpl-17c548f3d0cb440f9c0594b52788eb6f-0.
INFO 03-02 00:34:48 [logger.py:42] Received request cmpl-e24c955bc3414a0c8e47358419e7ef2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:48 [async_llm.py:261] Added request cmpl-e24c955bc3414a0c8e47358419e7ef2f-0.
INFO 03-02 00:34:49 [logger.py:42] Received request cmpl-0b780081fd9d48fb8688861ab8a33d16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:49 [async_llm.py:261] Added request cmpl-0b780081fd9d48fb8688861ab8a33d16-0.
INFO 03-02 00:34:50 [logger.py:42] Received request cmpl-5e8bab5977364a81828374d133b7d1dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:50 [async_llm.py:261] Added request cmpl-5e8bab5977364a81828374d133b7d1dd-0.
INFO 03-02 00:34:52 [logger.py:42] Received request cmpl-349cee67044d4bd3a1446747bdf7f5f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:52 [async_llm.py:261] Added request cmpl-349cee67044d4bd3a1446747bdf7f5f1-0.
INFO 03-02 00:34:53 [logger.py:42] Received request cmpl-56ef501921bf4670b478e0919c6d7694-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:53 [async_llm.py:261] Added request cmpl-56ef501921bf4670b478e0919c6d7694-0.
INFO 03-02 00:34:54 [logger.py:42] Received request cmpl-5e98de0b051e48dc91b5fd64123e4736-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:54 [async_llm.py:261] Added request cmpl-5e98de0b051e48dc91b5fd64123e4736-0.
INFO 03-02 00:34:55 [logger.py:42] Received request cmpl-b4f75925f7824ed094d166747ea28168-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:55 [async_llm.py:261] Added request cmpl-b4f75925f7824ed094d166747ea28168-0.
INFO 03-02 00:34:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:34:56 [logger.py:42] Received request cmpl-c19a2aa007164913bcaa48924a361b78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:56 [async_llm.py:261] Added request cmpl-c19a2aa007164913bcaa48924a361b78-0.
INFO 03-02 00:34:57 [logger.py:42] Received request cmpl-6bd25ca705e14446aa4a6f72e29aec69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:57 [async_llm.py:261] Added request cmpl-6bd25ca705e14446aa4a6f72e29aec69-0.
INFO 03-02 00:34:58 [logger.py:42] Received request cmpl-906ee5282b3b40aaa4403b607d12f896-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:58 [async_llm.py:261] Added request cmpl-906ee5282b3b40aaa4403b607d12f896-0.
INFO 03-02 00:34:59 [logger.py:42] Received request cmpl-f4827cf9946940979cdd9576f3b11a0b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:34:59 [async_llm.py:261] Added request cmpl-f4827cf9946940979cdd9576f3b11a0b-0.
INFO 03-02 00:35:00 [logger.py:42] Received request cmpl-d2bdae1c63ae4cc39327749dbb53f7ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:00 [async_llm.py:261] Added request cmpl-d2bdae1c63ae4cc39327749dbb53f7ab-0.
INFO 03-02 00:35:01 [logger.py:42] Received request cmpl-fdd4ccd990794af3b3bf7717bcfd69ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:01 [async_llm.py:261] Added request cmpl-fdd4ccd990794af3b3bf7717bcfd69ae-0.
INFO 03-02 00:35:02 [logger.py:42] Received request cmpl-f5e649380e244d1681ce81fbba90e93c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:02 [async_llm.py:261] Added request cmpl-f5e649380e244d1681ce81fbba90e93c-0.
INFO 03-02 00:35:04 [logger.py:42] Received request cmpl-2f7b46fe7fae42b5af9c4292243c4957-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:04 [async_llm.py:261] Added request cmpl-2f7b46fe7fae42b5af9c4292243c4957-0.
INFO 03-02 00:35:05 [logger.py:42] Received request cmpl-f6950667a3a54990a607b0c03c4c3879-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:05 [async_llm.py:261] Added request cmpl-f6950667a3a54990a607b0c03c4c3879-0.
INFO 03-02 00:35:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:06 [logger.py:42] Received request cmpl-2a0fce4ffd284e169b2e82e4a4f50034-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:06 [async_llm.py:261] Added request cmpl-2a0fce4ffd284e169b2e82e4a4f50034-0.
INFO 03-02 00:35:07 [logger.py:42] Received request cmpl-7fc4cec32ac347569cac450f91260dbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:07 [async_llm.py:261] Added request cmpl-7fc4cec32ac347569cac450f91260dbb-0.
INFO 03-02 00:35:08 [logger.py:42] Received request cmpl-ebff2a1637a64c288a3c35b9cd9dcb90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:08 [async_llm.py:261] Added request cmpl-ebff2a1637a64c288a3c35b9cd9dcb90-0.
INFO 03-02 00:35:09 [logger.py:42] Received request cmpl-047d8026d5e042649aef962f4bf74f9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:09 [async_llm.py:261] Added request cmpl-047d8026d5e042649aef962f4bf74f9b-0.
INFO 03-02 00:35:10 [logger.py:42] Received request cmpl-2712d82798dc43b2a2d1042912d893d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:10 [async_llm.py:261] Added request cmpl-2712d82798dc43b2a2d1042912d893d7-0.
INFO 03-02 00:35:11 [logger.py:42] Received request cmpl-54a54f7fe7ed486291815a0c797ff8d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:11 [async_llm.py:261] Added request cmpl-54a54f7fe7ed486291815a0c797ff8d6-0.
INFO 03-02 00:35:12 [logger.py:42] Received request cmpl-b99936aa9bc64cbcbf5b1aea01c9bae5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:12 [async_llm.py:261] Added request cmpl-b99936aa9bc64cbcbf5b1aea01c9bae5-0.
INFO 03-02 00:35:13 [logger.py:42] Received request cmpl-5515da7e735e48c7b718309a191b8a9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:13 [async_llm.py:261] Added request cmpl-5515da7e735e48c7b718309a191b8a9b-0.
INFO 03-02 00:35:15 [logger.py:42] Received request cmpl-1a69552b2df84f0d8fc651672fa68278-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:15 [async_llm.py:261] Added request cmpl-1a69552b2df84f0d8fc651672fa68278-0.
INFO 03-02 00:35:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:16 [logger.py:42] Received request cmpl-f2f21810d9ce409c91846e57387a0c9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:16 [async_llm.py:261] Added request cmpl-f2f21810d9ce409c91846e57387a0c9e-0.
INFO 03-02 00:35:17 [logger.py:42] Received request cmpl-8f235042b46f4239ada4b49eb43f748b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:17 [async_llm.py:261] Added request cmpl-8f235042b46f4239ada4b49eb43f748b-0.
INFO 03-02 00:35:18 [logger.py:42] Received request cmpl-315c50a22a0c42569c92d2abc8a0203b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:18 [async_llm.py:261] Added request cmpl-315c50a22a0c42569c92d2abc8a0203b-0.
INFO 03-02 00:35:19 [logger.py:42] Received request cmpl-0b592fb4962848f1899b12d3469be6fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:19 [async_llm.py:261] Added request cmpl-0b592fb4962848f1899b12d3469be6fa-0.
INFO 03-02 00:35:20 [logger.py:42] Received request cmpl-15a0dacfcde445ce934de7b4118920bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:20 [async_llm.py:261] Added request cmpl-15a0dacfcde445ce934de7b4118920bc-0.
INFO 03-02 00:35:21 [logger.py:42] Received request cmpl-9533086711db42e6b5c162d59e9215a6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:21 [async_llm.py:261] Added request cmpl-9533086711db42e6b5c162d59e9215a6-0.
INFO 03-02 00:35:22 [logger.py:42] Received request cmpl-eb0516c2037a4e4ea8d22827c84a0652-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:22 [async_llm.py:261] Added request cmpl-eb0516c2037a4e4ea8d22827c84a0652-0.
INFO 03-02 00:35:23 [logger.py:42] Received request cmpl-e230507e960c4c278c30c37bf0795881-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:23 [async_llm.py:261] Added request cmpl-e230507e960c4c278c30c37bf0795881-0.
INFO 03-02 00:35:24 [logger.py:42] Received request cmpl-975d3e54dd6a4a43a17093a9d7b4caab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:24 [async_llm.py:261] Added request cmpl-975d3e54dd6a4a43a17093a9d7b4caab-0.
INFO 03-02 00:35:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:25 [logger.py:42] Received request cmpl-582c8d159afd4ef49a3052519200c842-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:25 [async_llm.py:261] Added request cmpl-582c8d159afd4ef49a3052519200c842-0.
INFO 03-02 00:35:27 [logger.py:42] Received request cmpl-acf761b032f04f89a4df30faced4525b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:27 [async_llm.py:261] Added request cmpl-acf761b032f04f89a4df30faced4525b-0.
INFO 03-02 00:35:28 [logger.py:42] Received request cmpl-38b4ab48cb984eb4bc3b52be79ee79cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:28 [async_llm.py:261] Added request cmpl-38b4ab48cb984eb4bc3b52be79ee79cd-0.
INFO 03-02 00:35:29 [logger.py:42] Received request cmpl-72965eebe2d947788cbdb1f3b7aef8f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:29 [async_llm.py:261] Added request cmpl-72965eebe2d947788cbdb1f3b7aef8f8-0.
INFO 03-02 00:35:30 [logger.py:42] Received request cmpl-5ce3aad7506a455aacba0637b026e7eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:30 [async_llm.py:261] Added request cmpl-5ce3aad7506a455aacba0637b026e7eb-0.
INFO 03-02 00:35:31 [logger.py:42] Received request cmpl-a648fb37d80a4904acfe4b9525ce0117-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:31 [async_llm.py:261] Added request cmpl-a648fb37d80a4904acfe4b9525ce0117-0.
INFO 03-02 00:35:32 [logger.py:42] Received request cmpl-8257c2741c6b4ae4b1e80f0614e82496-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:32 [async_llm.py:261] Added request cmpl-8257c2741c6b4ae4b1e80f0614e82496-0.
INFO 03-02 00:35:33 [logger.py:42] Received request cmpl-8e3df31693ce4282b4a6bb8b6ca26e5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:33 [async_llm.py:261] Added request cmpl-8e3df31693ce4282b4a6bb8b6ca26e5f-0.
INFO 03-02 00:35:34 [logger.py:42] Received request cmpl-7c80f17bf40245f3b97cc33103d95a20-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:34 [async_llm.py:261] Added request cmpl-7c80f17bf40245f3b97cc33103d95a20-0.
INFO 03-02 00:35:35 [logger.py:42] Received request cmpl-b461241934e9483f89744768c7ca853b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:35 [async_llm.py:261] Added request cmpl-b461241934e9483f89744768c7ca853b-0.
INFO 03-02 00:35:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:36 [logger.py:42] Received request cmpl-ec3ab87cdab64f4a95b8c423fe9d387a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:36 [async_llm.py:261] Added request cmpl-ec3ab87cdab64f4a95b8c423fe9d387a-0.
INFO 03-02 00:35:37 [logger.py:42] Received request cmpl-e3bbe5f0d71b4719ac4cb1bad99a6e75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:37 [async_llm.py:261] Added request cmpl-e3bbe5f0d71b4719ac4cb1bad99a6e75-0.
INFO 03-02 00:35:39 [logger.py:42] Received request cmpl-ec5e1aec60b34cc3a3453623be30fcf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:39 [async_llm.py:261] Added request cmpl-ec5e1aec60b34cc3a3453623be30fcf4-0.
INFO 03-02 00:35:40 [logger.py:42] Received request cmpl-4b506ef67d2a40698591f1909595318f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:40 [async_llm.py:261] Added request cmpl-4b506ef67d2a40698591f1909595318f-0.
INFO 03-02 00:35:41 [logger.py:42] Received request cmpl-205ed2e188e147f0ac8b76914afc375b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:41 [async_llm.py:261] Added request cmpl-205ed2e188e147f0ac8b76914afc375b-0.
INFO 03-02 00:35:42 [logger.py:42] Received request cmpl-0c073b8a15b94efc9d291c769e66b592-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:42 [async_llm.py:261] Added request cmpl-0c073b8a15b94efc9d291c769e66b592-0.
INFO 03-02 00:35:43 [logger.py:42] Received request cmpl-0e71ca6ef95a4912b5fc369b68605bfd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:43 [async_llm.py:261] Added request cmpl-0e71ca6ef95a4912b5fc369b68605bfd-0.
INFO 03-02 00:35:44 [logger.py:42] Received request cmpl-4e41fae31610463289eee21191a2014b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:44 [async_llm.py:261] Added request cmpl-4e41fae31610463289eee21191a2014b-0.
INFO 03-02 00:35:45 [logger.py:42] Received request cmpl-df0a60de154a42f89ce6d97a175bd761-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:45 [async_llm.py:261] Added request cmpl-df0a60de154a42f89ce6d97a175bd761-0.
INFO 03-02 00:35:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:46 [logger.py:42] Received request cmpl-725291c9dc224b64b7201bf93da5fad6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:46 [async_llm.py:261] Added request cmpl-725291c9dc224b64b7201bf93da5fad6-0.
INFO 03-02 00:35:47 [logger.py:42] Received request cmpl-82e05b1d21954939bee011d9ff7defa1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:47 [async_llm.py:261] Added request cmpl-82e05b1d21954939bee011d9ff7defa1-0.
INFO 03-02 00:35:48 [logger.py:42] Received request cmpl-e504372fd2e64a2cbdb400b4c6576541-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:48 [async_llm.py:261] Added request cmpl-e504372fd2e64a2cbdb400b4c6576541-0.
INFO 03-02 00:35:50 [logger.py:42] Received request cmpl-44a32ff9001641d18f37605521e51810-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:50 [async_llm.py:261] Added request cmpl-44a32ff9001641d18f37605521e51810-0.
INFO 03-02 00:35:51 [logger.py:42] Received request cmpl-f47230a2a6054eeda921d73f244ab25c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:51 [async_llm.py:261] Added request cmpl-f47230a2a6054eeda921d73f244ab25c-0.
INFO 03-02 00:35:52 [logger.py:42] Received request cmpl-83a2401ad4a54bc0b63b89a67a652cdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:52 [async_llm.py:261] Added request cmpl-83a2401ad4a54bc0b63b89a67a652cdb-0.
INFO 03-02 00:35:53 [logger.py:42] Received request cmpl-39b5db79632a4fd8b3aaa595a843a656-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:53 [async_llm.py:261] Added request cmpl-39b5db79632a4fd8b3aaa595a843a656-0.
INFO 03-02 00:35:54 [logger.py:42] Received request cmpl-72731f7351c341c7b6fc806eb94bc93b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:54 [async_llm.py:261] Added request cmpl-72731f7351c341c7b6fc806eb94bc93b-0.
INFO 03-02 00:35:55 [logger.py:42] Received request cmpl-dd955d0586954bb4affd03b7baa28dda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:55 [async_llm.py:261] Added request cmpl-dd955d0586954bb4affd03b7baa28dda-0.
INFO 03-02 00:35:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:35:56 [logger.py:42] Received request cmpl-14373252adf0450988a455b1b37b5104-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:56 [async_llm.py:261] Added request cmpl-14373252adf0450988a455b1b37b5104-0.
INFO 03-02 00:35:57 [logger.py:42] Received request cmpl-30e09dec41f54ffabd70afda1faee371-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:57 [async_llm.py:261] Added request cmpl-30e09dec41f54ffabd70afda1faee371-0.
INFO 03-02 00:35:58 [logger.py:42] Received request cmpl-aef52318108a49e9802bdbfea7886755-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:58 [async_llm.py:261] Added request cmpl-aef52318108a49e9802bdbfea7886755-0.
INFO 03-02 00:35:59 [logger.py:42] Received request cmpl-26b92c697e1740ebba17258cec23edf8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:35:59 [async_llm.py:261] Added request cmpl-26b92c697e1740ebba17258cec23edf8-0.
INFO 03-02 00:36:00 [logger.py:42] Received request cmpl-ae65b38bb05f4abcbe233cbb3d4e211f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:00 [async_llm.py:261] Added request cmpl-ae65b38bb05f4abcbe233cbb3d4e211f-0.
INFO 03-02 00:36:02 [logger.py:42] Received request cmpl-cb65ff11710449e7bf7df384d2d3023f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:02 [async_llm.py:261] Added request cmpl-cb65ff11710449e7bf7df384d2d3023f-0.
INFO 03-02 00:36:03 [logger.py:42] Received request cmpl-97450f91c71d4051a3ad8cc380dd3f7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:03 [async_llm.py:261] Added request cmpl-97450f91c71d4051a3ad8cc380dd3f7a-0.
INFO 03-02 00:36:04 [logger.py:42] Received request cmpl-a7b1edb20b7c4dbc8f21da0e058453cd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:04 [async_llm.py:261] Added request cmpl-a7b1edb20b7c4dbc8f21da0e058453cd-0.
INFO 03-02 00:36:05 [logger.py:42] Received request cmpl-b5adb21a411a4d8ab7fa65d1b697140e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:05 [async_llm.py:261] Added request cmpl-b5adb21a411a4d8ab7fa65d1b697140e-0.
INFO 03-02 00:36:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:06 [logger.py:42] Received request cmpl-15d4efc948454a7d9360b695d22c3e96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:06 [async_llm.py:261] Added request cmpl-15d4efc948454a7d9360b695d22c3e96-0.
INFO 03-02 00:36:07 [logger.py:42] Received request cmpl-689dbc9746214d6d8c425fe3c734a8e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:07 [async_llm.py:261] Added request cmpl-689dbc9746214d6d8c425fe3c734a8e7-0.
INFO 03-02 00:36:08 [logger.py:42] Received request cmpl-ce202d27f8b74aafb7255e71ae8172f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:08 [async_llm.py:261] Added request cmpl-ce202d27f8b74aafb7255e71ae8172f6-0.
INFO 03-02 00:36:09 [logger.py:42] Received request cmpl-750d0d3cfde047c69158ed6169cac660-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:09 [async_llm.py:261] Added request cmpl-750d0d3cfde047c69158ed6169cac660-0.
INFO 03-02 00:36:10 [logger.py:42] Received request cmpl-35cb77fa5d8f4dc3aba71d6ea53193a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:10 [async_llm.py:261] Added request cmpl-35cb77fa5d8f4dc3aba71d6ea53193a9-0.
INFO 03-02 00:36:11 [logger.py:42] Received request cmpl-25b0bd8946eb476b82b98ad66ed32e11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:11 [async_llm.py:261] Added request cmpl-25b0bd8946eb476b82b98ad66ed32e11-0.
INFO 03-02 00:36:12 [logger.py:42] Received request cmpl-0e084e12634e4c40a47a9dcebb2a5a03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:12 [async_llm.py:261] Added request cmpl-0e084e12634e4c40a47a9dcebb2a5a03-0.
INFO 03-02 00:36:14 [logger.py:42] Received request cmpl-60d661f0c85c40af86a343400993968f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:14 [async_llm.py:261] Added request cmpl-60d661f0c85c40af86a343400993968f-0.
INFO 03-02 00:36:15 [logger.py:42] Received request cmpl-e20a20432c364c94b3cb412b7fa75b68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:15 [async_llm.py:261] Added request cmpl-e20a20432c364c94b3cb412b7fa75b68-0.
INFO 03-02 00:36:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:16 [logger.py:42] Received request cmpl-1b92cf65730443e5962233831a4fab87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:16 [async_llm.py:261] Added request cmpl-1b92cf65730443e5962233831a4fab87-0.
INFO 03-02 00:36:17 [logger.py:42] Received request cmpl-8fbd9758c55746379cf48e85746ab1c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:17 [async_llm.py:261] Added request cmpl-8fbd9758c55746379cf48e85746ab1c0-0.
INFO 03-02 00:36:18 [logger.py:42] Received request cmpl-79952bb0cacc41fb91abd1c23a2202a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:18 [async_llm.py:261] Added request cmpl-79952bb0cacc41fb91abd1c23a2202a1-0.
INFO 03-02 00:36:19 [logger.py:42] Received request cmpl-bb4984253a734647a71ab558f0453f12-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:19 [async_llm.py:261] Added request cmpl-bb4984253a734647a71ab558f0453f12-0.
INFO 03-02 00:36:20 [logger.py:42] Received request cmpl-6be7d7aed12e497ea5176a603440921c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:20 [async_llm.py:261] Added request cmpl-6be7d7aed12e497ea5176a603440921c-0.
INFO 03-02 00:36:21 [logger.py:42] Received request cmpl-c9ff12c1a54245e5b994b9a5724cd068-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:21 [async_llm.py:261] Added request cmpl-c9ff12c1a54245e5b994b9a5724cd068-0.
INFO 03-02 00:36:22 [logger.py:42] Received request cmpl-01a0bc8202344fc887d1f72f16f030c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:22 [async_llm.py:261] Added request cmpl-01a0bc8202344fc887d1f72f16f030c7-0.
INFO 03-02 00:36:23 [logger.py:42] Received request cmpl-5c6627243d294a7d996bf326996d1f9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:23 [async_llm.py:261] Added request cmpl-5c6627243d294a7d996bf326996d1f9b-0.
INFO 03-02 00:36:25 [logger.py:42] Received request cmpl-63c0622b1fed489b8870ad764b695ee2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:25 [async_llm.py:261] Added request cmpl-63c0622b1fed489b8870ad764b695ee2-0.
INFO 03-02 00:36:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:26 [logger.py:42] Received request cmpl-4cce5c2b030a46468f00a43a0448a37e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:26 [async_llm.py:261] Added request cmpl-4cce5c2b030a46468f00a43a0448a37e-0.
INFO 03-02 00:36:27 [logger.py:42] Received request cmpl-864fb0e2cf504660b983550d4a630263-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:27 [async_llm.py:261] Added request cmpl-864fb0e2cf504660b983550d4a630263-0.
INFO 03-02 00:36:28 [logger.py:42] Received request cmpl-0bddf351b92e4acc840d9d0f3fb59019-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:28 [async_llm.py:261] Added request cmpl-0bddf351b92e4acc840d9d0f3fb59019-0.
INFO 03-02 00:36:29 [logger.py:42] Received request cmpl-30aac795dfd146478e1573bce92d179b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:29 [async_llm.py:261] Added request cmpl-30aac795dfd146478e1573bce92d179b-0.
INFO 03-02 00:36:30 [logger.py:42] Received request cmpl-5bc79f0b162d40a4ab293112c80b1281-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:30 [async_llm.py:261] Added request cmpl-5bc79f0b162d40a4ab293112c80b1281-0.
INFO 03-02 00:36:31 [logger.py:42] Received request cmpl-0e85882ca8a24951816b2a9432dcaad0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:31 [async_llm.py:261] Added request cmpl-0e85882ca8a24951816b2a9432dcaad0-0.
INFO 03-02 00:36:32 [logger.py:42] Received request cmpl-c587376e19224e0ba863513d31435134-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:32 [async_llm.py:261] Added request cmpl-c587376e19224e0ba863513d31435134-0.
INFO 03-02 00:36:33 [logger.py:42] Received request cmpl-86d3edc736444af5afd55333dcb5a51f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:33 [async_llm.py:261] Added request cmpl-86d3edc736444af5afd55333dcb5a51f-0.
INFO 03-02 00:36:34 [logger.py:42] Received request cmpl-88f6f0aebba84186891f32871bf2d3ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:34 [async_llm.py:261] Added request cmpl-88f6f0aebba84186891f32871bf2d3ec-0.
INFO 03-02 00:36:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:35 [logger.py:42] Received request cmpl-9fd371eecbe74ec4b316e3b9da78bb13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:35 [async_llm.py:261] Added request cmpl-9fd371eecbe74ec4b316e3b9da78bb13-0.
INFO 03-02 00:36:37 [logger.py:42] Received request cmpl-388fd1c0c2914107810820cc12684f1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:37 [async_llm.py:261] Added request cmpl-388fd1c0c2914107810820cc12684f1b-0.
INFO 03-02 00:36:38 [logger.py:42] Received request cmpl-b93757661e5d40a8b0199ab1ee04610f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:38 [async_llm.py:261] Added request cmpl-b93757661e5d40a8b0199ab1ee04610f-0.
INFO 03-02 00:36:39 [logger.py:42] Received request cmpl-5a5b2d4e5a7b42fa9e59e403f97f2df6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:39 [async_llm.py:261] Added request cmpl-5a5b2d4e5a7b42fa9e59e403f97f2df6-0.
INFO 03-02 00:36:40 [logger.py:42] Received request cmpl-98fef5ca44bb4b82af9b2303e3090b1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:40 [async_llm.py:261] Added request cmpl-98fef5ca44bb4b82af9b2303e3090b1a-0.
INFO 03-02 00:36:41 [logger.py:42] Received request cmpl-fe3c491966194ed6ad049053bad8b6f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:41 [async_llm.py:261] Added request cmpl-fe3c491966194ed6ad049053bad8b6f9-0.
INFO 03-02 00:36:42 [logger.py:42] Received request cmpl-fa2f1247740b4998b03e3cc1662bd7ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:42 [async_llm.py:261] Added request cmpl-fa2f1247740b4998b03e3cc1662bd7ad-0.
INFO 03-02 00:36:43 [logger.py:42] Received request cmpl-219e05ea38314f2c9e151f907094b108-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:43 [async_llm.py:261] Added request cmpl-219e05ea38314f2c9e151f907094b108-0.
INFO 03-02 00:36:44 [logger.py:42] Received request cmpl-8dcd77d69b05416eaf5dab2e1805bc51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:44 [async_llm.py:261] Added request cmpl-8dcd77d69b05416eaf5dab2e1805bc51-0.
INFO 03-02 00:36:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:45 [logger.py:42] Received request cmpl-c0e5ea9469334c2f8e89a3acd8a633b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:45 [async_llm.py:261] Added request cmpl-c0e5ea9469334c2f8e89a3acd8a633b5-0.
INFO 03-02 00:36:46 [logger.py:42] Received request cmpl-613c3303f300417c9d541400b6f160fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:46 [async_llm.py:261] Added request cmpl-613c3303f300417c9d541400b6f160fd-0.
INFO 03-02 00:36:48 [logger.py:42] Received request cmpl-8efc31a107ee4ee6811cdeac56e5ef3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:48 [async_llm.py:261] Added request cmpl-8efc31a107ee4ee6811cdeac56e5ef3b-0.
INFO 03-02 00:36:49 [logger.py:42] Received request cmpl-94381d78d838467c9f94f84b42684e8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:49 [async_llm.py:261] Added request cmpl-94381d78d838467c9f94f84b42684e8c-0.
INFO 03-02 00:36:50 [logger.py:42] Received request cmpl-71d7119bf0f94ac8ae4f77a22d710133-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:50 [async_llm.py:261] Added request cmpl-71d7119bf0f94ac8ae4f77a22d710133-0.
INFO 03-02 00:36:51 [logger.py:42] Received request cmpl-4ce593dccf3e4dbda10bb981f94de6a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:51 [async_llm.py:261] Added request cmpl-4ce593dccf3e4dbda10bb981f94de6a3-0.
INFO 03-02 00:36:52 [logger.py:42] Received request cmpl-955e24092a4b4bca8102fdd54fe8a046-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:52 [async_llm.py:261] Added request cmpl-955e24092a4b4bca8102fdd54fe8a046-0.
INFO 03-02 00:36:53 [logger.py:42] Received request cmpl-1de2a708e4da48f3b0fd6d1bbc8536f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:53 [async_llm.py:261] Added request cmpl-1de2a708e4da48f3b0fd6d1bbc8536f8-0.
INFO 03-02 00:36:54 [logger.py:42] Received request cmpl-6c50434a6e7f49039453e82ffe08a8a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:54 [async_llm.py:261] Added request cmpl-6c50434a6e7f49039453e82ffe08a8a3-0.
INFO 03-02 00:36:55 [logger.py:42] Received request cmpl-f32008ed589949c08e9bb7f7deae97b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:55 [async_llm.py:261] Added request cmpl-f32008ed589949c08e9bb7f7deae97b5-0.
INFO 03-02 00:36:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:36:56 [logger.py:42] Received request cmpl-c43aec550dc4446ca2df81a7778e9d8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:56 [async_llm.py:261] Added request cmpl-c43aec550dc4446ca2df81a7778e9d8e-0.
INFO 03-02 00:36:57 [logger.py:42] Received request cmpl-cd3892439bc040e6b716afa14762423a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:57 [async_llm.py:261] Added request cmpl-cd3892439bc040e6b716afa14762423a-0.
INFO 03-02 00:36:58 [logger.py:42] Received request cmpl-fe711779a189469ea951e435d5949d05-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:36:58 [async_llm.py:261] Added request cmpl-fe711779a189469ea951e435d5949d05-0.
INFO 03-02 00:37:00 [logger.py:42] Received request cmpl-94a70250d4714d5dbc41609ccecc23a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:00 [async_llm.py:261] Added request cmpl-94a70250d4714d5dbc41609ccecc23a0-0.
INFO 03-02 00:37:01 [logger.py:42] Received request cmpl-452bd028be804ed390a1f18d9130b553-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:01 [async_llm.py:261] Added request cmpl-452bd028be804ed390a1f18d9130b553-0.
INFO 03-02 00:37:02 [logger.py:42] Received request cmpl-5d39965eae05455d8e08353c5f32269a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:02 [async_llm.py:261] Added request cmpl-5d39965eae05455d8e08353c5f32269a-0.
INFO 03-02 00:37:03 [logger.py:42] Received request cmpl-8f3c64417b424c03bca78d1334d2117c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:03 [async_llm.py:261] Added request cmpl-8f3c64417b424c03bca78d1334d2117c-0.
INFO 03-02 00:37:04 [logger.py:42] Received request cmpl-90093d32eaf8410e8501253def2a725e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:04 [async_llm.py:261] Added request cmpl-90093d32eaf8410e8501253def2a725e-0.
INFO 03-02 00:37:05 [logger.py:42] Received request cmpl-d28ef4af1e7046b69045727279d1fd5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:05 [async_llm.py:261] Added request cmpl-d28ef4af1e7046b69045727279d1fd5c-0.
INFO 03-02 00:37:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:06 [logger.py:42] Received request cmpl-69a0899bfd5040b09ebf4383d43cc549-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:06 [async_llm.py:261] Added request cmpl-69a0899bfd5040b09ebf4383d43cc549-0.
INFO 03-02 00:37:07 [logger.py:42] Received request cmpl-aff9c6f8e5734040b8ef2f022c19ef8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:07 [async_llm.py:261] Added request cmpl-aff9c6f8e5734040b8ef2f022c19ef8c-0.
INFO 03-02 00:37:08 [logger.py:42] Received request cmpl-a670c5ddea7a4de4bb657069ffbae4db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:08 [async_llm.py:261] Added request cmpl-a670c5ddea7a4de4bb657069ffbae4db-0.
INFO 03-02 00:37:09 [logger.py:42] Received request cmpl-8f903303bee84ea59a59c104df581fb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:09 [async_llm.py:261] Added request cmpl-8f903303bee84ea59a59c104df581fb7-0.
INFO 03-02 00:37:10 [logger.py:42] Received request cmpl-2796e5b5c7fd4cda976dc65e4c9e7bc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:10 [async_llm.py:261] Added request cmpl-2796e5b5c7fd4cda976dc65e4c9e7bc6-0.
INFO 03-02 00:37:12 [logger.py:42] Received request cmpl-077760692f6044c89014124b2ea9b4f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:12 [async_llm.py:261] Added request cmpl-077760692f6044c89014124b2ea9b4f5-0.
INFO 03-02 00:37:13 [logger.py:42] Received request cmpl-dce7ea5d45d449f6aa6161fdf4848bc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:13 [async_llm.py:261] Added request cmpl-dce7ea5d45d449f6aa6161fdf4848bc1-0.
INFO 03-02 00:37:14 [logger.py:42] Received request cmpl-9625574cd55f45728c4ab202761aa6cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:14 [async_llm.py:261] Added request cmpl-9625574cd55f45728c4ab202761aa6cf-0.
INFO 03-02 00:37:15 [logger.py:42] Received request cmpl-04b5ac78d6c54472a02556426d30fc76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:15 [async_llm.py:261] Added request cmpl-04b5ac78d6c54472a02556426d30fc76-0.
INFO 03-02 00:37:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:16 [logger.py:42] Received request cmpl-e1699f2cf7b84120b06d740b1340a732-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:16 [async_llm.py:261] Added request cmpl-e1699f2cf7b84120b06d740b1340a732-0.
INFO 03-02 00:37:17 [logger.py:42] Received request cmpl-ee9285e3022341febb7df216d84672c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:17 [async_llm.py:261] Added request cmpl-ee9285e3022341febb7df216d84672c1-0.
INFO 03-02 00:37:18 [logger.py:42] Received request cmpl-4287f343037c4ea0ba9145b02ef13420-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:18 [async_llm.py:261] Added request cmpl-4287f343037c4ea0ba9145b02ef13420-0.
INFO 03-02 00:37:19 [logger.py:42] Received request cmpl-f771e54b1ba946cc9c9219e46e179e5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:19 [async_llm.py:261] Added request cmpl-f771e54b1ba946cc9c9219e46e179e5d-0.
INFO 03-02 00:37:20 [logger.py:42] Received request cmpl-897afa6541e94e029b569783e1f5d22a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:20 [async_llm.py:261] Added request cmpl-897afa6541e94e029b569783e1f5d22a-0.
INFO 03-02 00:37:21 [logger.py:42] Received request cmpl-dbb4c7f3182e46388b640b58d70900f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:21 [async_llm.py:261] Added request cmpl-dbb4c7f3182e46388b640b58d70900f1-0.
INFO 03-02 00:37:23 [logger.py:42] Received request cmpl-a69b2a3212ee465ebb6798e942e89fef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:23 [async_llm.py:261] Added request cmpl-a69b2a3212ee465ebb6798e942e89fef-0.
INFO 03-02 00:37:24 [logger.py:42] Received request cmpl-97174d3bac7c4019b6265efa621b04bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:24 [async_llm.py:261] Added request cmpl-97174d3bac7c4019b6265efa621b04bf-0.
INFO 03-02 00:37:25 [logger.py:42] Received request cmpl-f873c0b7f13e49d8aa9edc1f92c9a0db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:25 [async_llm.py:261] Added request cmpl-f873c0b7f13e49d8aa9edc1f92c9a0db-0.
INFO 03-02 00:37:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:26 [logger.py:42] Received request cmpl-9ba4ae2f849b450c80177e4f9f51d18f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:26 [async_llm.py:261] Added request cmpl-9ba4ae2f849b450c80177e4f9f51d18f-0.
INFO 03-02 00:37:27 [logger.py:42] Received request cmpl-5b5670f320974f53b4b06714abd80345-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:27 [async_llm.py:261] Added request cmpl-5b5670f320974f53b4b06714abd80345-0.
INFO 03-02 00:37:28 [logger.py:42] Received request cmpl-d4cf28ee244545eeb403fac2bb68d57f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:28 [async_llm.py:261] Added request cmpl-d4cf28ee244545eeb403fac2bb68d57f-0.
INFO 03-02 00:37:29 [logger.py:42] Received request cmpl-d8cba836f8284415aac0cef26b64f3bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:29 [async_llm.py:261] Added request cmpl-d8cba836f8284415aac0cef26b64f3bd-0.
INFO 03-02 00:37:30 [logger.py:42] Received request cmpl-4f5ca7a781e94533a980728239762398-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:30 [async_llm.py:261] Added request cmpl-4f5ca7a781e94533a980728239762398-0.
INFO 03-02 00:37:31 [logger.py:42] Received request cmpl-690efc41ecd94de0bd31ba82e9835bee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:31 [async_llm.py:261] Added request cmpl-690efc41ecd94de0bd31ba82e9835bee-0.
INFO 03-02 00:37:32 [logger.py:42] Received request cmpl-7eb1a9845ad14210a10313e683b3d419-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:32 [async_llm.py:261] Added request cmpl-7eb1a9845ad14210a10313e683b3d419-0.
INFO 03-02 00:37:33 [logger.py:42] Received request cmpl-9c4cb6ba2d07472294f31a0e151a30d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:33 [async_llm.py:261] Added request cmpl-9c4cb6ba2d07472294f31a0e151a30d1-0.
INFO 03-02 00:37:35 [logger.py:42] Received request cmpl-f5740cea3a3f46419d4c288c437ce7d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:35 [async_llm.py:261] Added request cmpl-f5740cea3a3f46419d4c288c437ce7d4-0.
INFO 03-02 00:37:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:37:36 [logger.py:42] Received request cmpl-2d547cf4bc45444fbfb169a761495f8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:36 [async_llm.py:261] Added request cmpl-2d547cf4bc45444fbfb169a761495f8a-0.
INFO 03-02 00:37:37 [logger.py:42] Received request cmpl-29ec7fef36904881bb57e34ffb7dcc23-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:37 [async_llm.py:261] Added request cmpl-29ec7fef36904881bb57e34ffb7dcc23-0.
INFO 03-02 00:37:38 [logger.py:42] Received request cmpl-2e2acf381c6f4dbda6dbc2c7bcf23540-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:38 [async_llm.py:261] Added request cmpl-2e2acf381c6f4dbda6dbc2c7bcf23540-0.
INFO 03-02 00:37:39 [logger.py:42] Received request cmpl-6ec7811901524dc38cd5bffbbcb9e17d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:39 [async_llm.py:261] Added request cmpl-6ec7811901524dc38cd5bffbbcb9e17d-0.
INFO 03-02 00:37:40 [logger.py:42] Received request cmpl-22fc9c299fda4c32885036b5c21e6a7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:37:40 [async_llm.py:261] Added request cmpl-22fc9c299fda4c32885036b5c21e6a7a-0.
INFO 03-02 00:37:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:25 [async_llm.py:261] Added request cmpl-81182dc792a145b9ad090984a7182ddd-0.
INFO 03-02 00:38:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:26 [logger.py:42] Received request cmpl-4c1d4dd19c154b508c415136fda3304d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:26 [async_llm.py:261] Added request cmpl-4c1d4dd19c154b508c415136fda3304d-0.
INFO 03-02 00:38:27 [logger.py:42] Received request cmpl-f954ade2eb524455814354c3444068c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:27 [async_llm.py:261] Added request cmpl-f954ade2eb524455814354c3444068c6-0.
INFO 03-02 00:38:28 [logger.py:42] Received request cmpl-a5f3d772ceff4310804717fb6ce111e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:28 [async_llm.py:261] Added request cmpl-a5f3d772ceff4310804717fb6ce111e8-0.
INFO 03-02 00:38:29 [logger.py:42] Received request cmpl-3dbbe864cce9473b8a7121e1c07e127b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:29 [async_llm.py:261] Added request cmpl-3dbbe864cce9473b8a7121e1c07e127b-0.
INFO 03-02 00:38:31 [logger.py:42] Received request cmpl-37702178c6c3499fb04aa9d035eddaa7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:31 [async_llm.py:261] Added request cmpl-37702178c6c3499fb04aa9d035eddaa7-0.
INFO 03-02 00:38:32 [logger.py:42] Received request cmpl-52bb8ed5c1bd4dfb85e5f8e058b75bbf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:32 [async_llm.py:261] Added request cmpl-52bb8ed5c1bd4dfb85e5f8e058b75bbf-0.
INFO 03-02 00:38:33 [logger.py:42] Received request cmpl-1342a199344f4b7daa83bf7ebffc0b37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:33 [async_llm.py:261] Added request cmpl-1342a199344f4b7daa83bf7ebffc0b37-0.
INFO 03-02 00:38:34 [logger.py:42] Received request cmpl-4a26c3b0b7be4233ac1806530003c82e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:34 [async_llm.py:261] Added request cmpl-4a26c3b0b7be4233ac1806530003c82e-0.
INFO 03-02 00:38:35 [logger.py:42] Received request cmpl-7439ccd1f4024b6fb44d10022b744096-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:35 [async_llm.py:261] Added request cmpl-7439ccd1f4024b6fb44d10022b744096-0.
INFO 03-02 00:38:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:36 [logger.py:42] Received request cmpl-e9fb3e861c764a519b8fe43aadeac720-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:36 [async_llm.py:261] Added request cmpl-e9fb3e861c764a519b8fe43aadeac720-0.
INFO 03-02 00:38:37 [logger.py:42] Received request cmpl-9fb5dcbc2d294cf2af7c0741bf360659-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:37 [async_llm.py:261] Added request cmpl-9fb5dcbc2d294cf2af7c0741bf360659-0.
INFO 03-02 00:38:38 [logger.py:42] Received request cmpl-7ab1c8d4b6154d929968f9d0a5f85e2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:38 [async_llm.py:261] Added request cmpl-7ab1c8d4b6154d929968f9d0a5f85e2a-0.
INFO 03-02 00:38:39 [logger.py:42] Received request cmpl-655607dd890546248da0dcea33aaabc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:39 [async_llm.py:261] Added request cmpl-655607dd890546248da0dcea33aaabc7-0.
INFO 03-02 00:38:40 [logger.py:42] Received request cmpl-85f7a12b38f845638ba9ad4469924cd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:40 [async_llm.py:261] Added request cmpl-85f7a12b38f845638ba9ad4469924cd7-0.
INFO 03-02 00:38:41 [logger.py:42] Received request cmpl-7c2e3443f4bd4f17b11886f10bcf04ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:41 [async_llm.py:261] Added request cmpl-7c2e3443f4bd4f17b11886f10bcf04ff-0.
INFO 03-02 00:38:43 [logger.py:42] Received request cmpl-ec7f7d08b57d4e31ac38f39d6631e4a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:43 [async_llm.py:261] Added request cmpl-ec7f7d08b57d4e31ac38f39d6631e4a1-0.
INFO 03-02 00:38:44 [logger.py:42] Received request cmpl-ff3c961f3a224114a96a622a179c96ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:44 [async_llm.py:261] Added request cmpl-ff3c961f3a224114a96a622a179c96ab-0.
INFO 03-02 00:38:45 [logger.py:42] Received request cmpl-c6682e84a87f42de92f149c5975467e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:45 [async_llm.py:261] Added request cmpl-c6682e84a87f42de92f149c5975467e1-0.
INFO 03-02 00:38:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:46 [logger.py:42] Received request cmpl-6bf3233af46541959eb62352380da677-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:46 [async_llm.py:261] Added request cmpl-6bf3233af46541959eb62352380da677-0.
INFO 03-02 00:38:47 [logger.py:42] Received request cmpl-e6f4aca7782a4b1e8b42815b58eab322-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:47 [async_llm.py:261] Added request cmpl-e6f4aca7782a4b1e8b42815b58eab322-0.
INFO 03-02 00:38:48 [logger.py:42] Received request cmpl-b81253cfe9cb48a693b52d98420e60ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:48 [async_llm.py:261] Added request cmpl-b81253cfe9cb48a693b52d98420e60ca-0.
INFO 03-02 00:38:49 [logger.py:42] Received request cmpl-2aa9224035634550af47124e389c8164-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:49 [async_llm.py:261] Added request cmpl-2aa9224035634550af47124e389c8164-0.
INFO 03-02 00:38:50 [logger.py:42] Received request cmpl-a124d0b7296445658c2e09c8d1e47b82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:50 [async_llm.py:261] Added request cmpl-a124d0b7296445658c2e09c8d1e47b82-0.
INFO 03-02 00:38:51 [logger.py:42] Received request cmpl-ec11e4d90ab64bdbb798f82340e0fa7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:51 [async_llm.py:261] Added request cmpl-ec11e4d90ab64bdbb798f82340e0fa7a-0.
INFO 03-02 00:38:52 [logger.py:42] Received request cmpl-05fbc6f8083c42f38fa6551ead32a79f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:52 [async_llm.py:261] Added request cmpl-05fbc6f8083c42f38fa6551ead32a79f-0.
INFO 03-02 00:38:54 [logger.py:42] Received request cmpl-cf83f7d02313446390777c069801dc10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:54 [async_llm.py:261] Added request cmpl-cf83f7d02313446390777c069801dc10-0.
INFO 03-02 00:38:55 [logger.py:42] Received request cmpl-5ea51247a9944921a20bd207b9709a99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:55 [async_llm.py:261] Added request cmpl-5ea51247a9944921a20bd207b9709a99-0.
INFO 03-02 00:38:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:38:56 [logger.py:42] Received request cmpl-c9e7332f813d4501883915a752f7bb0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:56 [async_llm.py:261] Added request cmpl-c9e7332f813d4501883915a752f7bb0a-0.
INFO 03-02 00:38:57 [logger.py:42] Received request cmpl-9a753ceb092b482d9472241dc5f295c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:57 [async_llm.py:261] Added request cmpl-9a753ceb092b482d9472241dc5f295c2-0.
INFO 03-02 00:38:58 [logger.py:42] Received request cmpl-821c1d1254294a53aaf29fbcb6a07819-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:58 [async_llm.py:261] Added request cmpl-821c1d1254294a53aaf29fbcb6a07819-0.
INFO 03-02 00:38:59 [logger.py:42] Received request cmpl-a5113dc8f1144ab68651f6c65360a89f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:38:59 [async_llm.py:261] Added request cmpl-a5113dc8f1144ab68651f6c65360a89f-0.
INFO 03-02 00:39:00 [logger.py:42] Received request cmpl-1b29f733eacb4898adb7fe08cecdf706-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:00 [async_llm.py:261] Added request cmpl-1b29f733eacb4898adb7fe08cecdf706-0.
INFO 03-02 00:39:01 [logger.py:42] Received request cmpl-c66123a6a1c040f99a06a81e25e7c877-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:01 [async_llm.py:261] Added request cmpl-c66123a6a1c040f99a06a81e25e7c877-0.
INFO 03-02 00:39:02 [logger.py:42] Received request cmpl-b983f016c0254b0b9b19481cef24d9a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:02 [async_llm.py:261] Added request cmpl-b983f016c0254b0b9b19481cef24d9a8-0.
INFO 03-02 00:39:03 [logger.py:42] Received request cmpl-3ee1165334fd4df1b7a5867ed5d6ad99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:03 [async_llm.py:261] Added request cmpl-3ee1165334fd4df1b7a5867ed5d6ad99-0.
INFO 03-02 00:39:04 [logger.py:42] Received request cmpl-238ca9106b2e4e01b701832cc36eaf44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:04 [async_llm.py:261] Added request cmpl-238ca9106b2e4e01b701832cc36eaf44-0.
INFO 03-02 00:39:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
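The `loggers.py:116` lines above are vLLM's periodic engine-stats reports (throughput, queue depth, KV-cache usage). For monitoring a funcpod it can be handy to extract these numbers from the raw log stream; the sketch below does so with a regex. The pattern is an assumption derived from the exact lines shown here — vLLM's stats format can differ between versions, so treat it as illustrative, not canonical.

```python
import re

# Pattern matching the engine-stats lines shown in this log. Assumption:
# field order and wording follow the "loggers.py:116" lines above exactly.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_usage>[\d.]+)%"
)

def parse_engine_stats(line: str):
    """Return the metrics embedded in a stats log line, or None if absent."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_usage_pct": float(d["kv_usage"]),
    }

# Example: the stats line logged at 00:39:05 above.
line = ("INFO 03-02 00:39:05 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")
print(parse_engine_stats(line))
```

Feeding each log line through `parse_engine_stats` and keeping the non-`None` results gives a time series of engine load suitable for plotting or alerting.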
INFO 03-02 00:39:06 [logger.py:42] Received request cmpl-8ed836032ff646ce851b471ba7b6ddf3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:06 [async_llm.py:261] Added request cmpl-8ed836032ff646ce851b471ba7b6ddf3-0.
INFO 03-02 00:39:07 [logger.py:42] Received request cmpl-0f2011000e7b407ca086bc9c890e33a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:07 [async_llm.py:261] Added request cmpl-0f2011000e7b407ca086bc9c890e33a4-0.
INFO 03-02 00:39:08 [logger.py:42] Received request cmpl-ae56c697363d4d738d668a9a5d6c92a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:08 [async_llm.py:261] Added request cmpl-ae56c697363d4d738d668a9a5d6c92a2-0.
INFO 03-02 00:39:09 [logger.py:42] Received request cmpl-ddc43d00703742aaa99c58926f6c147a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:09 [async_llm.py:261] Added request cmpl-ddc43d00703742aaa99c58926f6c147a-0.
INFO 03-02 00:39:10 [logger.py:42] Received request cmpl-69c2673700c24c1e95be8f9caa432344-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:10 [async_llm.py:261] Added request cmpl-69c2673700c24c1e95be8f9caa432344-0.
INFO 03-02 00:39:11 [logger.py:42] Received request cmpl-65d617e01e76458fb4df9c5e609ae44a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:11 [async_llm.py:261] Added request cmpl-65d617e01e76458fb4df9c5e609ae44a-0.
INFO 03-02 00:39:12 [logger.py:42] Received request cmpl-58618d14081d4db38a101c110915e8f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:12 [async_llm.py:261] Added request cmpl-58618d14081d4db38a101c110915e8f8-0.
INFO 03-02 00:39:13 [logger.py:42] Received request cmpl-00d758075893400b82ec0b52717fa1a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:13 [async_llm.py:261] Added request cmpl-00d758075893400b82ec0b52717fa1a7-0.
INFO 03-02 00:39:14 [logger.py:42] Received request cmpl-01ead4c8cc334740a323adfcba123695-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:14 [async_llm.py:261] Added request cmpl-01ead4c8cc334740a323adfcba123695-0.
INFO 03-02 00:39:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:15 [logger.py:42] Received request cmpl-eb0093ec751f4a13bd08fc521c35d544-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:15 [async_llm.py:261] Added request cmpl-eb0093ec751f4a13bd08fc521c35d544-0.
INFO 03-02 00:39:17 [logger.py:42] Received request cmpl-53174e2a041848c1a1b1ea90072ed504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:17 [async_llm.py:261] Added request cmpl-53174e2a041848c1a1b1ea90072ed504-0.
INFO 03-02 00:39:18 [logger.py:42] Received request cmpl-2884c7d8c41b4a7ca15730a7f82f3ffd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:18 [async_llm.py:261] Added request cmpl-2884c7d8c41b4a7ca15730a7f82f3ffd-0.
INFO 03-02 00:39:19 [logger.py:42] Received request cmpl-509fd69f292e45f9b44ba86bb4b077d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:19 [async_llm.py:261] Added request cmpl-509fd69f292e45f9b44ba86bb4b077d9-0.
INFO 03-02 00:39:20 [logger.py:42] Received request cmpl-bfe0958c00844d33970047f431a19b00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:20 [async_llm.py:261] Added request cmpl-bfe0958c00844d33970047f431a19b00-0.
INFO 03-02 00:39:21 [logger.py:42] Received request cmpl-bcf497e114c34be8b7b242b3fba2f243-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:21 [async_llm.py:261] Added request cmpl-bcf497e114c34be8b7b242b3fba2f243-0.
INFO 03-02 00:39:22 [logger.py:42] Received request cmpl-e4611aba46774a238e1646efea242a42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:22 [async_llm.py:261] Added request cmpl-e4611aba46774a238e1646efea242a42-0.
INFO 03-02 00:39:23 [logger.py:42] Received request cmpl-8d122dd2cc6f4beaba323987a1ec1b0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:23 [async_llm.py:261] Added request cmpl-8d122dd2cc6f4beaba323987a1ec1b0c-0.
INFO 03-02 00:39:24 [logger.py:42] Received request cmpl-17f92a109dbc46c59d4f19b24ac80462-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:24 [async_llm.py:261] Added request cmpl-17f92a109dbc46c59d4f19b24ac80462-0.
INFO 03-02 00:39:25 [logger.py:42] Received request cmpl-5e048f141ce54d12ad071b4b11767d80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:25 [async_llm.py:261] Added request cmpl-5e048f141ce54d12ad071b4b11767d80-0.
INFO 03-02 00:39:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:26 [logger.py:42] Received request cmpl-be73c390b60e4662ac16a489ab9c01ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:26 [async_llm.py:261] Added request cmpl-be73c390b60e4662ac16a489ab9c01ec-0.
INFO 03-02 00:39:27 [logger.py:42] Received request cmpl-8070e0057f1b469aaf73b9718e199958-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:27 [async_llm.py:261] Added request cmpl-8070e0057f1b469aaf73b9718e199958-0.
INFO 03-02 00:39:29 [logger.py:42] Received request cmpl-8d45e1b4eff24cb5995816b7dd89654f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:29 [async_llm.py:261] Added request cmpl-8d45e1b4eff24cb5995816b7dd89654f-0.
INFO 03-02 00:39:30 [logger.py:42] Received request cmpl-bf9a3c0ae3524112897d8c8b29023867-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:30 [async_llm.py:261] Added request cmpl-bf9a3c0ae3524112897d8c8b29023867-0.
INFO 03-02 00:39:31 [logger.py:42] Received request cmpl-da5b8e90ec664f24aebf6f61a9aaec35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:31 [async_llm.py:261] Added request cmpl-da5b8e90ec664f24aebf6f61a9aaec35-0.
INFO 03-02 00:39:32 [logger.py:42] Received request cmpl-652272347619499ca812eaeca9453ce3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:32 [async_llm.py:261] Added request cmpl-652272347619499ca812eaeca9453ce3-0.
INFO 03-02 00:39:33 [logger.py:42] Received request cmpl-dc1e578531934d5ab28dc8e9420a474a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:33 [async_llm.py:261] Added request cmpl-dc1e578531934d5ab28dc8e9420a474a-0.
INFO 03-02 00:39:34 [logger.py:42] Received request cmpl-033033cdd289472eb73d7c360f2458cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:34 [async_llm.py:261] Added request cmpl-033033cdd289472eb73d7c360f2458cf-0.
INFO 03-02 00:39:35 [logger.py:42] Received request cmpl-d384d6b1b5b24c018c94f8f1696bde84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:35 [async_llm.py:261] Added request cmpl-d384d6b1b5b24c018c94f8f1696bde84-0.
INFO 03-02 00:39:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:36 [logger.py:42] Received request cmpl-42a000f606e641cba8ab3e1fd3ae0b21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:36 [async_llm.py:261] Added request cmpl-42a000f606e641cba8ab3e1fd3ae0b21-0.
INFO 03-02 00:39:37 [logger.py:42] Received request cmpl-5b0e30175e3149fbad5f7e0dde9f765f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:37 [async_llm.py:261] Added request cmpl-5b0e30175e3149fbad5f7e0dde9f765f-0.
INFO 03-02 00:39:38 [logger.py:42] Received request cmpl-8f1acde91cf14e4b8c42a4065e5387db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:38 [async_llm.py:261] Added request cmpl-8f1acde91cf14e4b8c42a4065e5387db-0.
INFO 03-02 00:39:40 [logger.py:42] Received request cmpl-1e68cccf077445b9b898327ce377acd9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:40 [async_llm.py:261] Added request cmpl-1e68cccf077445b9b898327ce377acd9-0.
INFO 03-02 00:39:41 [logger.py:42] Received request cmpl-487cb449de934523abca764534493081-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:41 [async_llm.py:261] Added request cmpl-487cb449de934523abca764534493081-0.
INFO 03-02 00:39:42 [logger.py:42] Received request cmpl-dd583b8aecfa4f06b5b749cd70ca3881-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:42 [async_llm.py:261] Added request cmpl-dd583b8aecfa4f06b5b749cd70ca3881-0.
INFO 03-02 00:39:43 [logger.py:42] Received request cmpl-d85571a8dfb84a06892406beb2fc363e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:43 [async_llm.py:261] Added request cmpl-d85571a8dfb84a06892406beb2fc363e-0.
INFO 03-02 00:39:44 [logger.py:42] Received request cmpl-a0c75ca542e1471e97d414984b197cd3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:44 [async_llm.py:261] Added request cmpl-a0c75ca542e1471e97d414984b197cd3-0.
INFO 03-02 00:39:45 [logger.py:42] Received request cmpl-048565b0598446e8a15bb56c839b773d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:45 [async_llm.py:261] Added request cmpl-048565b0598446e8a15bb56c839b773d-0.
INFO 03-02 00:39:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
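The per-request lines and the periodic `loggers.py:116` metric lines have stable formats, so they can be scraped with two regular expressions. A small sketch that extracts request IDs and the engine throughput figures from lines shaped like the ones above:

```python
import re

# Two sample lines copied from the log above (params elided from the first).
LOG = (
    "INFO 03-02 00:39:38 [logger.py:42] Received request "
    "cmpl-8f1acde91cf14e4b8c42a4065e5387db-0: prompt: 'write a quick sort algorithm.'\n"
    "INFO 03-02 00:39:45 [loggers.py:116] Engine 000: "
    "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
    "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%\n"
)

# Request IDs look like cmpl-<32 hex chars>-<index>.
REQ = re.compile(r"Received request (cmpl-[0-9a-f]+-\d+)")
TPUT = re.compile(
    r"Avg prompt throughput: ([\d.]+) tokens/s, "
    r"Avg generation throughput: ([\d.]+) tokens/s"
)

request_ids = REQ.findall(LOG)
prompt_tps, gen_tps = map(float, TPUT.search(LOG).groups())
```

With patterns like these, a log tail can be turned into a running count of requests per metric window, which is exactly what the consistency check further below relies on.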
INFO 03-02 00:39:46 [logger.py:42] Received request cmpl-c0ea931ec59a47dd89d2aac7f4406753-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:46 [async_llm.py:261] Added request cmpl-c0ea931ec59a47dd89d2aac7f4406753-0.
INFO 03-02 00:39:47 [logger.py:42] Received request cmpl-08e63d69a63441288a91f00d705c38bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:47 [async_llm.py:261] Added request cmpl-08e63d69a63441288a91f00d705c38bf-0.
INFO 03-02 00:39:48 [logger.py:42] Received request cmpl-c06f4ececfe144b99fb596b5430ba735-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:48 [async_llm.py:261] Added request cmpl-c06f4ececfe144b99fb596b5430ba735-0.
INFO 03-02 00:39:49 [logger.py:42] Received request cmpl-9dddcff5f65640bf8e94b44096221ce6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:49 [async_llm.py:261] Added request cmpl-9dddcff5f65640bf8e94b44096221ce6-0.
INFO 03-02 00:39:50 [logger.py:42] Received request cmpl-53f26dac230e423284905b1f9723981e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:50 [async_llm.py:261] Added request cmpl-53f26dac230e423284905b1f9723981e-0.
INFO 03-02 00:39:52 [logger.py:42] Received request cmpl-9a1caaa06ac7488892cb86fc31a7fcff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:52 [async_llm.py:261] Added request cmpl-9a1caaa06ac7488892cb86fc31a7fcff-0.
INFO 03-02 00:39:53 [logger.py:42] Received request cmpl-f0a49f0a64eb46288e7e4fe79dedf076-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:53 [async_llm.py:261] Added request cmpl-f0a49f0a64eb46288e7e4fe79dedf076-0.
INFO 03-02 00:39:54 [logger.py:42] Received request cmpl-d1cc7fbde0a44252862acb2844d76183-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:54 [async_llm.py:261] Added request cmpl-d1cc7fbde0a44252862acb2844d76183-0.
INFO 03-02 00:39:55 [logger.py:42] Received request cmpl-02bcb5129f934262add7f720f789ef44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:55 [async_llm.py:261] Added request cmpl-02bcb5129f934262add7f720f789ef44-0.
INFO 03-02 00:39:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:39:56 [logger.py:42] Received request cmpl-97504324969d4955b4ee7d0f40edf41f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:56 [async_llm.py:261] Added request cmpl-97504324969d4955b4ee7d0f40edf41f-0.
INFO 03-02 00:39:57 [logger.py:42] Received request cmpl-f7e11747bf8d49ce9bf3f74d2b178290-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:57 [async_llm.py:261] Added request cmpl-f7e11747bf8d49ce9bf3f74d2b178290-0.
INFO 03-02 00:39:58 [logger.py:42] Received request cmpl-6b41d9ebd2f0454ca2dfc4e9a2edba44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:58 [async_llm.py:261] Added request cmpl-6b41d9ebd2f0454ca2dfc4e9a2edba44-0.
INFO 03-02 00:39:59 [logger.py:42] Received request cmpl-abd05c9a4f5b458797b8806cccf50b39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:39:59 [async_llm.py:261] Added request cmpl-abd05c9a4f5b458797b8806cccf50b39-0.
INFO 03-02 00:40:00 [logger.py:42] Received request cmpl-e2d75304ba574799981ced79677e664b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:00 [async_llm.py:261] Added request cmpl-e2d75304ba574799981ced79677e664b-0.
INFO 03-02 00:40:01 [logger.py:42] Received request cmpl-815d7070e2524805ab7bd07a0b27ee53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:01 [async_llm.py:261] Added request cmpl-815d7070e2524805ab7bd07a0b27ee53-0.
INFO 03-02 00:40:03 [logger.py:42] Received request cmpl-8447223129834756b1ae0b45d25c4916-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:03 [async_llm.py:261] Added request cmpl-8447223129834756b1ae0b45d25c4916-0.
INFO 03-02 00:40:04 [logger.py:42] Received request cmpl-a9cf686bd8d043ec9b0b16c51e780b67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:04 [async_llm.py:261] Added request cmpl-a9cf686bd8d043ec9b0b16c51e780b67-0.
INFO 03-02 00:40:05 [logger.py:42] Received request cmpl-f91e12f96f5747e197dfbf5f37a49b3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:05 [async_llm.py:261] Added request cmpl-f91e12f96f5747e197dfbf5f37a49b3b-0.
INFO 03-02 00:40:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:40:06 [logger.py:42] Received request cmpl-55f478bc7bd341839650a8d0ce7e1bef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:06 [async_llm.py:261] Added request cmpl-55f478bc7bd341839650a8d0ce7e1bef-0.
INFO 03-02 00:40:07 [logger.py:42] Received request cmpl-ed2b352b345b4683abcc8504ff738136-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:07 [async_llm.py:261] Added request cmpl-ed2b352b345b4683abcc8504ff738136-0.
INFO 03-02 00:40:08 [logger.py:42] Received request cmpl-97272437ff674544a75e4d60e45a3eed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:08 [async_llm.py:261] Added request cmpl-97272437ff674544a75e4d60e45a3eed-0.
INFO 03-02 00:40:09 [logger.py:42] Received request cmpl-3e6525d69f8444ae8f8c3559a4fb465a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:09 [async_llm.py:261] Added request cmpl-3e6525d69f8444ae8f8c3559a4fb465a-0.
INFO 03-02 00:40:10 [logger.py:42] Received request cmpl-80df0c7085ab4bc69b1ed93d7a269e11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:10 [async_llm.py:261] Added request cmpl-80df0c7085ab4bc69b1ed93d7a269e11-0.
INFO 03-02 00:40:11 [logger.py:42] Received request cmpl-8deb130d51f84833a8af36ff24032aa0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:11 [async_llm.py:261] Added request cmpl-8deb130d51f84833a8af36ff24032aa0-0.
INFO 03-02 00:40:12 [logger.py:42] Received request cmpl-6a786e69200e449a8d7727bf6b8ba3ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:12 [async_llm.py:261] Added request cmpl-6a786e69200e449a8d7727bf6b8ba3ce-0.
INFO 03-02 00:40:13 [logger.py:42] Received request cmpl-48681ea07f4d415c96174e4ef518bb6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:13 [async_llm.py:261] Added request cmpl-48681ea07f4d415c96174e4ef518bb6d-0.
INFO 03-02 00:40:15 [logger.py:42] Received request cmpl-2f044c5c7a9147578a9817cc828ccb1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:15 [async_llm.py:261] Added request cmpl-2f044c5c7a9147578a9817cc828ccb1b-0.
INFO 03-02 00:40:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
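The reported averages can be sanity-checked against the request stream itself. Every request carries a 7-token prompt (`prompt_token_ids` has 7 entries) and `max_tokens=5`, and between the 00:39:45 and 00:39:55 metric lines 9 requests arrive in 10 seconds. Assuming each greedy completion runs to its 5-token cap (an assumption; `ignore_eos=False`, so a completion could stop earlier), the arithmetic reproduces the logged figures:

```python
PROMPT_TOKENS = 7   # length of prompt_token_ids in every request
MAX_TOKENS = 5      # max_tokens in every request's SamplingParams

# Counted from the log: 9 "Received request" lines in the 10 s
# between the 00:39:45 and 00:39:55 metric lines.
requests_per_s = 9 / 10

prompt_tps = requests_per_s * PROMPT_TOKENS  # expected avg prompt throughput
gen_tps = requests_per_s * MAX_TOKENS        # expected avg generation throughput
```

Both values round to the 6.3 and 4.5 tokens/s the engine reports, which also explains the 0.0% KV-cache usage and zero running/waiting requests: each tiny request finishes well before the next one arrives.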
INFO 03-02 00:40:16 [logger.py:42] Received request cmpl-f8268814a244426cb76af18d93e9e386-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:16 [async_llm.py:261] Added request cmpl-f8268814a244426cb76af18d93e9e386-0.
INFO 03-02 00:40:17 [logger.py:42] Received request cmpl-3e114f5f8d38469cb672b95fb98467f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:17 [async_llm.py:261] Added request cmpl-3e114f5f8d38469cb672b95fb98467f9-0.
INFO 03-02 00:40:18 [logger.py:42] Received request cmpl-04e8dc2a21594676b3fd00b754b5b70c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:40:18 [async_llm.py:261] Added request cmpl-04e8dc2a21594676b3fd00b754b5b70c-0.
[... 6 further /v1/completions requests (03-02 00:40:19 to 00:40:24), identical except for timestamp and request id: each received by logger.py, answered "200 OK", and added to the engine by async_llm.py ...]
INFO 03-02 00:40:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further /v1/completions requests (03-02 00:40:26 to 00:40:34), identical except for timestamp and request id: each received by logger.py, answered "200 OK", and added to the engine by async_llm.py ...]
INFO 03-02 00:40:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 further /v1/completions requests (03-02 00:40:35 to 00:40:45), identical except for timestamp and request id: each received by logger.py, answered "200 OK", and added to the engine by async_llm.py ...]
INFO 03-02 00:40:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further /v1/completions requests (03-02 00:40:46 to 00:40:55), identical except for timestamp and request id: each received by logger.py, answered "200 OK", and added to the engine by async_llm.py ...]
INFO 03-02 00:40:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 6 further /v1/completions requests (03-02 00:40:56 to 00:41:02), identical except for timestamp and request id: each received by logger.py, answered "200 OK", and added to the engine by async_llm.py ...]
INFO 03-02 00:41:03 [logger.py:42] Received request cmpl-674253f5512c4ff2a27da22c6090b2a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:03 [async_llm.py:261] Added request cmpl-674253f5512c4ff2a27da22c6090b2a2-0.
INFO 03-02 00:41:04 [logger.py:42] Received request cmpl-e1c51ec1e72b4cedab3e4278a72a0fa5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:04 [async_llm.py:261] Added request cmpl-e1c51ec1e72b4cedab3e4278a72a0fa5-0.
INFO 03-02 00:41:05 [logger.py:42] Received request cmpl-79d765a4ac664cb18879a1fbaaff7ff1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:05 [async_llm.py:261] Added request cmpl-79d765a4ac664cb18879a1fbaaff7ff1-0.
INFO 03-02 00:41:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:06 [logger.py:42] Received request cmpl-a1e58929d0fa4547ad96f484128370b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:06 [async_llm.py:261] Added request cmpl-a1e58929d0fa4547ad96f484128370b4-0.
INFO 03-02 00:41:07 [logger.py:42] Received request cmpl-8686e4a4b07042db92e0586a2b8d2287-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:07 [async_llm.py:261] Added request cmpl-8686e4a4b07042db92e0586a2b8d2287-0.
INFO 03-02 00:41:08 [logger.py:42] Received request cmpl-e7de034378d64b2ba3adbcd87dbcde0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:08 [async_llm.py:261] Added request cmpl-e7de034378d64b2ba3adbcd87dbcde0e-0.
INFO 03-02 00:41:09 [logger.py:42] Received request cmpl-705369e048a4419386cda96b7da0bee6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:09 [async_llm.py:261] Added request cmpl-705369e048a4419386cda96b7da0bee6-0.
INFO 03-02 00:41:10 [logger.py:42] Received request cmpl-17c07e6847df470788f99e57d5be50b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:10 [async_llm.py:261] Added request cmpl-17c07e6847df470788f99e57d5be50b5-0.
INFO 03-02 00:41:12 [logger.py:42] Received request cmpl-3e6d578975564a8681b4459fb48fa1f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:12 [async_llm.py:261] Added request cmpl-3e6d578975564a8681b4459fb48fa1f9-0.
INFO 03-02 00:41:13 [logger.py:42] Received request cmpl-c1b88d09fe774301856d70dbc1aa75a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:13 [async_llm.py:261] Added request cmpl-c1b88d09fe774301856d70dbc1aa75a2-0.
INFO 03-02 00:41:14 [logger.py:42] Received request cmpl-605b9a5f10d144ceaa99777b2df2a084-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:14 [async_llm.py:261] Added request cmpl-605b9a5f10d144ceaa99777b2df2a084-0.
INFO 03-02 00:41:15 [logger.py:42] Received request cmpl-b498e61566af4a4eb65b1eccf6c97523-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:15 [async_llm.py:261] Added request cmpl-b498e61566af4a4eb65b1eccf6c97523-0.
INFO 03-02 00:41:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:16 [logger.py:42] Received request cmpl-b8bc5f995c7b4a17876cfdde597d41a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:16 [async_llm.py:261] Added request cmpl-b8bc5f995c7b4a17876cfdde597d41a0-0.
INFO 03-02 00:41:17 [logger.py:42] Received request cmpl-64e27504591d40ba9189462dcd1d74b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:17 [async_llm.py:261] Added request cmpl-64e27504591d40ba9189462dcd1d74b1-0.
INFO 03-02 00:41:18 [logger.py:42] Received request cmpl-41ee7d8679d7428c8ed03bcaa40b8f9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:18 [async_llm.py:261] Added request cmpl-41ee7d8679d7428c8ed03bcaa40b8f9a-0.
INFO 03-02 00:41:19 [logger.py:42] Received request cmpl-01a9e98db9bb4ccb9d586471507d73e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:19 [async_llm.py:261] Added request cmpl-01a9e98db9bb4ccb9d586471507d73e0-0.
INFO 03-02 00:41:20 [logger.py:42] Received request cmpl-1a02c198c1ef47be98ee63ae91540d59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:20 [async_llm.py:261] Added request cmpl-1a02c198c1ef47be98ee63ae91540d59-0.
INFO 03-02 00:41:21 [logger.py:42] Received request cmpl-82b65477230a4239bc744d1f73c2304d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:21 [async_llm.py:261] Added request cmpl-82b65477230a4239bc744d1f73c2304d-0.
INFO 03-02 00:41:22 [logger.py:42] Received request cmpl-257a9ff0ec6e4e85816a5b23488ef131-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:22 [async_llm.py:261] Added request cmpl-257a9ff0ec6e4e85816a5b23488ef131-0.
INFO 03-02 00:41:24 [logger.py:42] Received request cmpl-e89f221a890f4710996942f9fd62829d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:24 [async_llm.py:261] Added request cmpl-e89f221a890f4710996942f9fd62829d-0.
INFO 03-02 00:41:25 [logger.py:42] Received request cmpl-4daad73f95d6467f964a4195e70129b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:25 [async_llm.py:261] Added request cmpl-4daad73f95d6467f964a4195e70129b1-0.
INFO 03-02 00:41:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:26 [logger.py:42] Received request cmpl-e4811897814e45eb88722ea53fdec1db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:26 [async_llm.py:261] Added request cmpl-e4811897814e45eb88722ea53fdec1db-0.
INFO 03-02 00:41:27 [logger.py:42] Received request cmpl-04b033d36dbe45699929256a25cb1101-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:27 [async_llm.py:261] Added request cmpl-04b033d36dbe45699929256a25cb1101-0.
INFO 03-02 00:41:28 [logger.py:42] Received request cmpl-dca699f1ba4643d980d62e8864e2bf11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:28 [async_llm.py:261] Added request cmpl-dca699f1ba4643d980d62e8864e2bf11-0.
INFO 03-02 00:41:29 [logger.py:42] Received request cmpl-0bdbbf6677b04f23bc2578317738da4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:29 [async_llm.py:261] Added request cmpl-0bdbbf6677b04f23bc2578317738da4f-0.
INFO 03-02 00:41:30 [logger.py:42] Received request cmpl-c1088a2d11e840a286c477be9398c40e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:30 [async_llm.py:261] Added request cmpl-c1088a2d11e840a286c477be9398c40e-0.
INFO 03-02 00:41:31 [logger.py:42] Received request cmpl-d7eb3dd45cec46dabdfe5b548c95d224-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:31 [async_llm.py:261] Added request cmpl-d7eb3dd45cec46dabdfe5b548c95d224-0.
INFO 03-02 00:41:32 [logger.py:42] Received request cmpl-87cb666cdcb7431f9f3dce43597ba43b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:32 [async_llm.py:261] Added request cmpl-87cb666cdcb7431f9f3dce43597ba43b-0.
INFO 03-02 00:41:33 [logger.py:42] Received request cmpl-d2d5ee932cea4a23b37638e23c89502e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:33 [async_llm.py:261] Added request cmpl-d2d5ee932cea4a23b37638e23c89502e-0.
INFO 03-02 00:41:35 [logger.py:42] Received request cmpl-8f8449bbf3214b1b9f31d0d5b82401f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:35 [async_llm.py:261] Added request cmpl-8f8449bbf3214b1b9f31d0d5b82401f9-0.
INFO 03-02 00:41:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:36 [logger.py:42] Received request cmpl-ccbb82d264b545f69708345fd1ff6969-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:36 [async_llm.py:261] Added request cmpl-ccbb82d264b545f69708345fd1ff6969-0.
INFO 03-02 00:41:37 [logger.py:42] Received request cmpl-559f67e90d3240c8aacfcc30498c933d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:41:37 [async_llm.py:261] Added request cmpl-559f67e90d3240c8aacfcc30498c933d-0.
[... the same three-line cycle (Received request / 200 OK / Added request) repeats roughly once per second through 00:42:22, identical except for request ID and timestamp; only the periodic engine-stats lines below differ ...]
INFO 03-02 00:41:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:41:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:22 [async_llm.py:261] Added request cmpl-f7e022e152364b7382a30e31650214a4-0.
INFO 03-02 00:42:23 [logger.py:42] Received request cmpl-9028af08e591483e93655c4273185c26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:23 [async_llm.py:261] Added request cmpl-9028af08e591483e93655c4273185c26-0.
INFO 03-02 00:42:24 [logger.py:42] Received request cmpl-f7865af44d3f49f0b0448ad773b068f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:24 [async_llm.py:261] Added request cmpl-f7865af44d3f49f0b0448ad773b068f1-0.
INFO 03-02 00:42:25 [logger.py:42] Received request cmpl-93a2b84271e04075811cab74bbe02360-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:25 [async_llm.py:261] Added request cmpl-93a2b84271e04075811cab74bbe02360-0.
INFO 03-02 00:42:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:26 [logger.py:42] Received request cmpl-54a42c1e050c4984b14736ae4e6ce42f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:26 [async_llm.py:261] Added request cmpl-54a42c1e050c4984b14736ae4e6ce42f-0.
INFO 03-02 00:42:27 [logger.py:42] Received request cmpl-713dcde45b2b4d39b37519ce923930f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:27 [async_llm.py:261] Added request cmpl-713dcde45b2b4d39b37519ce923930f9-0.
INFO 03-02 00:42:28 [logger.py:42] Received request cmpl-e0b98c8436a94c31bf3fe6e0ad599227-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:28 [async_llm.py:261] Added request cmpl-e0b98c8436a94c31bf3fe6e0ad599227-0.
INFO 03-02 00:42:29 [logger.py:42] Received request cmpl-d90d6ef7362c407bb237ca3a677bd712-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:29 [async_llm.py:261] Added request cmpl-d90d6ef7362c407bb237ca3a677bd712-0.
INFO 03-02 00:42:30 [logger.py:42] Received request cmpl-77c85a850a2b49c3befc442b8c097df3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:30 [async_llm.py:261] Added request cmpl-77c85a850a2b49c3befc442b8c097df3-0.
INFO 03-02 00:42:32 [logger.py:42] Received request cmpl-b6a9ce166dcc432cbe050813b9c48972-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:32 [async_llm.py:261] Added request cmpl-b6a9ce166dcc432cbe050813b9c48972-0.
INFO 03-02 00:42:33 [logger.py:42] Received request cmpl-e5886e8a6c6c4c5ebd9dd263a7145972-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:33 [async_llm.py:261] Added request cmpl-e5886e8a6c6c4c5ebd9dd263a7145972-0.
INFO 03-02 00:42:34 [logger.py:42] Received request cmpl-05b96c72798e4213a880ba46f0acfa15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:34 [async_llm.py:261] Added request cmpl-05b96c72798e4213a880ba46f0acfa15-0.
INFO 03-02 00:42:35 [logger.py:42] Received request cmpl-95ec3918ac9d40d59a85daa6e564fd3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:35 [async_llm.py:261] Added request cmpl-95ec3918ac9d40d59a85daa6e564fd3f-0.
INFO 03-02 00:42:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:36 [logger.py:42] Received request cmpl-3cc1dbfcfd5f415da889730a646a5b94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:36 [async_llm.py:261] Added request cmpl-3cc1dbfcfd5f415da889730a646a5b94-0.
INFO 03-02 00:42:37 [logger.py:42] Received request cmpl-5515ebe7d8564898977b83a1d2c82022-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:37 [async_llm.py:261] Added request cmpl-5515ebe7d8564898977b83a1d2c82022-0.
INFO 03-02 00:42:38 [logger.py:42] Received request cmpl-85a6c141674a47699123d848a9502229-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:38 [async_llm.py:261] Added request cmpl-85a6c141674a47699123d848a9502229-0.
INFO 03-02 00:42:39 [logger.py:42] Received request cmpl-ac9a7e063347426db26598f23df1a04d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:39 [async_llm.py:261] Added request cmpl-ac9a7e063347426db26598f23df1a04d-0.
INFO 03-02 00:42:40 [logger.py:42] Received request cmpl-2d134578e75443679d6f478f74c939be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:40 [async_llm.py:261] Added request cmpl-2d134578e75443679d6f478f74c939be-0.
INFO 03-02 00:42:41 [logger.py:42] Received request cmpl-dc3d4cf9b11b4e5ca73e3f0da012e66c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:41 [async_llm.py:261] Added request cmpl-dc3d4cf9b11b4e5ca73e3f0da012e66c-0.
INFO 03-02 00:42:42 [logger.py:42] Received request cmpl-7a5c27d112c3445b99c290c93d06f883-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:42 [async_llm.py:261] Added request cmpl-7a5c27d112c3445b99c290c93d06f883-0.
INFO 03-02 00:42:44 [logger.py:42] Received request cmpl-09c644a8b7ec43dc991ca4aca91f1a3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:44 [async_llm.py:261] Added request cmpl-09c644a8b7ec43dc991ca4aca91f1a3c-0.
INFO 03-02 00:42:45 [logger.py:42] Received request cmpl-8f225f32acf4476b82c22eef6c150c41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:45 [async_llm.py:261] Added request cmpl-8f225f32acf4476b82c22eef6c150c41-0.
INFO 03-02 00:42:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:46 [logger.py:42] Received request cmpl-06028f096c8b4db592906440e16041c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:46 [async_llm.py:261] Added request cmpl-06028f096c8b4db592906440e16041c9-0.
INFO 03-02 00:42:47 [logger.py:42] Received request cmpl-0f138482a4684dc199259966c3d98bf3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:47 [async_llm.py:261] Added request cmpl-0f138482a4684dc199259966c3d98bf3-0.
INFO 03-02 00:42:48 [logger.py:42] Received request cmpl-3a891d7955354ff585c3b29a6ecdd597-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:48 [async_llm.py:261] Added request cmpl-3a891d7955354ff585c3b29a6ecdd597-0.
INFO 03-02 00:42:49 [logger.py:42] Received request cmpl-2e33396950ac46aa9e903a1db90d8988-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:49 [async_llm.py:261] Added request cmpl-2e33396950ac46aa9e903a1db90d8988-0.
INFO 03-02 00:42:50 [logger.py:42] Received request cmpl-83a01a489bb74565b33dbf66c54a1f66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:50 [async_llm.py:261] Added request cmpl-83a01a489bb74565b33dbf66c54a1f66-0.
INFO 03-02 00:42:51 [logger.py:42] Received request cmpl-7c68c841705c473ab8ca319789f15935-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:51 [async_llm.py:261] Added request cmpl-7c68c841705c473ab8ca319789f15935-0.
INFO 03-02 00:42:52 [logger.py:42] Received request cmpl-61787382810c48dca6a7783ec85810bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:52 [async_llm.py:261] Added request cmpl-61787382810c48dca6a7783ec85810bf-0.
INFO 03-02 00:42:53 [logger.py:42] Received request cmpl-9412f57a8e6446a2ad67e1c7c749a5a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:53 [async_llm.py:261] Added request cmpl-9412f57a8e6446a2ad67e1c7c749a5a4-0.
INFO 03-02 00:42:55 [logger.py:42] Received request cmpl-c1dcc35f341544a693a428e85501be90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:55 [async_llm.py:261] Added request cmpl-c1dcc35f341544a693a428e85501be90-0.
INFO 03-02 00:42:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:42:56 [logger.py:42] Received request cmpl-dedb6c702420435d8625e040687ec40f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:42:56 [async_llm.py:261] Added request cmpl-dedb6c702420435d8625e040687ec40f-0.
INFO 03-02 00:43:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
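The periodic `loggers.py` stats lines are 10-second rolling averages, and they are consistent with the request pattern in the log: every request carries 7 prompt tokens (the length of `prompt_token_ids`) and caps generation at `max_tokens=5`, arriving at roughly one per second. A quick sanity check of that arithmetic, using the request timestamps from the window preceding this stats line and assuming each request generates the full 5 tokens:

```python
# Sanity-check the engine stats against the request pattern in the log.
PROMPT_TOKENS = 7   # length of prompt_token_ids in every logged request
GEN_TOKENS = 5      # max_tokens per request (assume fully generated)
WINDOW_S = 10.0     # interval between consecutive loggers.py stats lines

# Request arrival seconds inside the 00:43:25 -> 00:43:35 window (from the log;
# note the 2-second gap at 00:43:29, so 9 requests rather than 10).
arrivals = [26, 27, 28, 30, 31, 32, 33, 34, 35]
reqs_per_s = len(arrivals) / WINDOW_S

prompt_tps = reqs_per_s * PROMPT_TOKENS
gen_tps = reqs_per_s * GEN_TOKENS
print(f"prompt: {prompt_tps:.1f} tok/s, generation: {gen_tps:.1f} tok/s")
# -> prompt: 6.3 tok/s, generation: 4.5 tok/s  (matches the stats line above)
```

The earlier 7.0 / 5.0 tok/s window is the same arithmetic with 10 requests in the interval; `Running: 0 reqs` in each stats line reflects that each 5-token request completes well within a second, so the queue is empty at sampling time.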
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:41 [async_llm.py:261] Added request cmpl-8868517e3b2d4f5eb8986210c418207a-0.
INFO 03-02 00:43:42 [logger.py:42] Received request cmpl-a19c7edd24a14efd9acb6b6cb74b48b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:42 [async_llm.py:261] Added request cmpl-a19c7edd24a14efd9acb6b6cb74b48b5-0.
INFO 03-02 00:43:43 [logger.py:42] Received request cmpl-7143cb80463c4bed9b5b08be1cc5c011-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:43 [async_llm.py:261] Added request cmpl-7143cb80463c4bed9b5b08be1cc5c011-0.
INFO 03-02 00:43:44 [logger.py:42] Received request cmpl-28827ca761674640acae7c0589562d2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:44 [async_llm.py:261] Added request cmpl-28827ca761674640acae7c0589562d2a-0.
INFO 03-02 00:43:45 [logger.py:42] Received request cmpl-df67e775083048f087c0a860b6cc384e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:45 [async_llm.py:261] Added request cmpl-df67e775083048f087c0a860b6cc384e-0.
INFO 03-02 00:43:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:46 [logger.py:42] Received request cmpl-ea3f2eb0db70455a84a85a6775a7df91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:46 [async_llm.py:261] Added request cmpl-ea3f2eb0db70455a84a85a6775a7df91-0.
INFO 03-02 00:43:47 [logger.py:42] Received request cmpl-1aeaa0af3a2344b2a2deec6e35400b36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:47 [async_llm.py:261] Added request cmpl-1aeaa0af3a2344b2a2deec6e35400b36-0.
INFO 03-02 00:43:48 [logger.py:42] Received request cmpl-a9d3567fc136484eadc2aa14d8d0027c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:48 [async_llm.py:261] Added request cmpl-a9d3567fc136484eadc2aa14d8d0027c-0.
INFO 03-02 00:43:49 [logger.py:42] Received request cmpl-1355e8ded406434da5b0674354768525-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:49 [async_llm.py:261] Added request cmpl-1355e8ded406434da5b0674354768525-0.
INFO 03-02 00:43:50 [logger.py:42] Received request cmpl-85268972e1b74c7cbb27c5e49c333495-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:50 [async_llm.py:261] Added request cmpl-85268972e1b74c7cbb27c5e49c333495-0.
INFO 03-02 00:43:51 [logger.py:42] Received request cmpl-8f8b741610324b378f5894461d2bc8c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:51 [async_llm.py:261] Added request cmpl-8f8b741610324b378f5894461d2bc8c5-0.
INFO 03-02 00:43:53 [logger.py:42] Received request cmpl-94f5b0b1c331482e9666f55ce6d16db4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:53 [async_llm.py:261] Added request cmpl-94f5b0b1c331482e9666f55ce6d16db4-0.
INFO 03-02 00:43:54 [logger.py:42] Received request cmpl-41c2b0929dc243798f4f87be597c365e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:54 [async_llm.py:261] Added request cmpl-41c2b0929dc243798f4f87be597c365e-0.
INFO 03-02 00:43:55 [logger.py:42] Received request cmpl-0a6ff4636cc94a4f9de5c454841e314d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:55 [async_llm.py:261] Added request cmpl-0a6ff4636cc94a4f9de5c454841e314d-0.
INFO 03-02 00:43:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:43:56 [logger.py:42] Received request cmpl-82a066dae2cd40589477ba0bb77e0d98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:56 [async_llm.py:261] Added request cmpl-82a066dae2cd40589477ba0bb77e0d98-0.
INFO 03-02 00:43:57 [logger.py:42] Received request cmpl-51849f4ee9204fe1856851a53d4308ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:57 [async_llm.py:261] Added request cmpl-51849f4ee9204fe1856851a53d4308ac-0.
INFO 03-02 00:43:58 [logger.py:42] Received request cmpl-f9280a434e404175a6e5b7b6ebf12863-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:58 [async_llm.py:261] Added request cmpl-f9280a434e404175a6e5b7b6ebf12863-0.
INFO 03-02 00:43:59 [logger.py:42] Received request cmpl-473b77f4bbbb4539ae4d639470ca7d5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:43:59 [async_llm.py:261] Added request cmpl-473b77f4bbbb4539ae4d639470ca7d5e-0.
INFO 03-02 00:44:00 [logger.py:42] Received request cmpl-7bd815d5cc884643ad00e1f8f5da3b70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:00 [async_llm.py:261] Added request cmpl-7bd815d5cc884643ad00e1f8f5da3b70-0.
INFO 03-02 00:44:01 [logger.py:42] Received request cmpl-4876a30a86664dc49d207676ef8a273e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:01 [async_llm.py:261] Added request cmpl-4876a30a86664dc49d207676ef8a273e-0.
INFO 03-02 00:44:02 [logger.py:42] Received request cmpl-8b0c0be9fd6549b2b8a63710d3722bf2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:02 [async_llm.py:261] Added request cmpl-8b0c0be9fd6549b2b8a63710d3722bf2-0.
INFO 03-02 00:44:04 [logger.py:42] Received request cmpl-558a779d6c354adda5efee727fa69515-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:04 [async_llm.py:261] Added request cmpl-558a779d6c354adda5efee727fa69515-0.
INFO 03-02 00:44:05 [logger.py:42] Received request cmpl-8e253821113840678576a1edc6e633d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:05 [async_llm.py:261] Added request cmpl-8e253821113840678576a1edc6e633d9-0.
INFO 03-02 00:44:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:06 [logger.py:42] Received request cmpl-126bb2c4d8bb4d98b86e4f6295b30f01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:06 [async_llm.py:261] Added request cmpl-126bb2c4d8bb4d98b86e4f6295b30f01-0.
INFO 03-02 00:44:07 [logger.py:42] Received request cmpl-b8ff0870a2754c509ff9eb9f41b6869c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:07 [async_llm.py:261] Added request cmpl-b8ff0870a2754c509ff9eb9f41b6869c-0.
INFO 03-02 00:44:08 [logger.py:42] Received request cmpl-d4d1ce1efa294532b7e226b56fbffb47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:08 [async_llm.py:261] Added request cmpl-d4d1ce1efa294532b7e226b56fbffb47-0.
INFO 03-02 00:44:09 [logger.py:42] Received request cmpl-81241aca94a7441a911250247da3480a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:09 [async_llm.py:261] Added request cmpl-81241aca94a7441a911250247da3480a-0.
INFO 03-02 00:44:10 [logger.py:42] Received request cmpl-48b818844e324b55a01c42878c26e5be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:10 [async_llm.py:261] Added request cmpl-48b818844e324b55a01c42878c26e5be-0.
INFO 03-02 00:44:11 [logger.py:42] Received request cmpl-5c95826d20a04e4b9a8f62e26542806e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:11 [async_llm.py:261] Added request cmpl-5c95826d20a04e4b9a8f62e26542806e-0.
INFO 03-02 00:44:12 [logger.py:42] Received request cmpl-c3bd9416c67d42e692bfd65d4400ff99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:12 [async_llm.py:261] Added request cmpl-c3bd9416c67d42e692bfd65d4400ff99-0.
INFO 03-02 00:44:13 [logger.py:42] Received request cmpl-ea43875bc55e482a8a7973d5ad1e0116-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:13 [async_llm.py:261] Added request cmpl-ea43875bc55e482a8a7973d5ad1e0116-0.
INFO 03-02 00:44:15 [logger.py:42] Received request cmpl-67ecc619dc8f47a1a7c6120b93fe5299-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:15 [async_llm.py:261] Added request cmpl-67ecc619dc8f47a1a7c6120b93fe5299-0.
INFO 03-02 00:44:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:16 [logger.py:42] Received request cmpl-75d9a4298c894fafbd0d89a0eb85680b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:16 [async_llm.py:261] Added request cmpl-75d9a4298c894fafbd0d89a0eb85680b-0.
INFO 03-02 00:44:17 [logger.py:42] Received request cmpl-1ebeec6b2bb242eb8fb29df11bc4421d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:17 [async_llm.py:261] Added request cmpl-1ebeec6b2bb242eb8fb29df11bc4421d-0.
INFO 03-02 00:44:18 [logger.py:42] Received request cmpl-fb6123c1731642478bf52c20612ddf75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:18 [async_llm.py:261] Added request cmpl-fb6123c1731642478bf52c20612ddf75-0.
INFO 03-02 00:44:19 [logger.py:42] Received request cmpl-e5f83fd7e82b42ab82b286884f4c70b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:19 [async_llm.py:261] Added request cmpl-e5f83fd7e82b42ab82b286884f4c70b4-0.
INFO 03-02 00:44:20 [logger.py:42] Received request cmpl-14590e076dfe4d96adc774e981b564b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:20 [async_llm.py:261] Added request cmpl-14590e076dfe4d96adc774e981b564b3-0.
INFO 03-02 00:44:21 [logger.py:42] Received request cmpl-86420c8a99724274871631ac33793c7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:21 [async_llm.py:261] Added request cmpl-86420c8a99724274871631ac33793c7d-0.
INFO 03-02 00:44:22 [logger.py:42] Received request cmpl-e6a292d72ad9427593b68e3b52c1ecd3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:22 [async_llm.py:261] Added request cmpl-e6a292d72ad9427593b68e3b52c1ecd3-0.
INFO 03-02 00:44:23 [logger.py:42] Received request cmpl-5737db5ff1504190af34c866c3e4eced-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:23 [async_llm.py:261] Added request cmpl-5737db5ff1504190af34c866c3e4eced-0.
INFO 03-02 00:44:24 [logger.py:42] Received request cmpl-22e2547fd20644199fb89844b5c50611-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:24 [async_llm.py:261] Added request cmpl-22e2547fd20644199fb89844b5c50611-0.
INFO 03-02 00:44:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:25 [logger.py:42] Received request cmpl-314a0fd32e554ce1b356153f82ddae69-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:25 [async_llm.py:261] Added request cmpl-314a0fd32e554ce1b356153f82ddae69-0.
INFO 03-02 00:44:27 [logger.py:42] Received request cmpl-e376bfc819784503bdf0bc9ad5e1cc9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:27 [async_llm.py:261] Added request cmpl-e376bfc819784503bdf0bc9ad5e1cc9c-0.
INFO 03-02 00:44:28 [logger.py:42] Received request cmpl-975075780a7648359564d7a7a556cd4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:28 [async_llm.py:261] Added request cmpl-975075780a7648359564d7a7a556cd4d-0.
INFO 03-02 00:44:29 [logger.py:42] Received request cmpl-8bc8fefac9c243ab8e60a1dcccbe9b79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:29 [async_llm.py:261] Added request cmpl-8bc8fefac9c243ab8e60a1dcccbe9b79-0.
INFO 03-02 00:44:30 [logger.py:42] Received request cmpl-447307395edb4f4fa3bba63860295075-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:30 [async_llm.py:261] Added request cmpl-447307395edb4f4fa3bba63860295075-0.
INFO 03-02 00:44:31 [logger.py:42] Received request cmpl-1f1a3280077c4012b3e1e41f9c8aa36d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:31 [async_llm.py:261] Added request cmpl-1f1a3280077c4012b3e1e41f9c8aa36d-0.
INFO 03-02 00:44:32 [logger.py:42] Received request cmpl-88c8478d6607401eaae5b7eddfb8235a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:32 [async_llm.py:261] Added request cmpl-88c8478d6607401eaae5b7eddfb8235a-0.
INFO 03-02 00:44:33 [logger.py:42] Received request cmpl-d458e0cd2b7146c8a4ae8cdd85e06e77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:33 [async_llm.py:261] Added request cmpl-d458e0cd2b7146c8a4ae8cdd85e06e77-0.
INFO 03-02 00:44:34 [logger.py:42] Received request cmpl-a0fdb5b2865440c9b620010dfe4f6f43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:34 [async_llm.py:261] Added request cmpl-a0fdb5b2865440c9b620010dfe4f6f43-0.
INFO 03-02 00:44:35 [logger.py:42] Received request cmpl-e8033d2d54b5446aae6c0bd59fd31f90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:35 [async_llm.py:261] Added request cmpl-e8033d2d54b5446aae6c0bd59fd31f90-0.
INFO 03-02 00:44:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:36 [logger.py:42] Received request cmpl-05612c4e833e4188b0f79c42674c87b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:36 [async_llm.py:261] Added request cmpl-05612c4e833e4188b0f79c42674c87b7-0.
INFO 03-02 00:44:38 [logger.py:42] Received request cmpl-590f822e363f4a1688b3995d20a2c75a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:38 [async_llm.py:261] Added request cmpl-590f822e363f4a1688b3995d20a2c75a-0.
INFO 03-02 00:44:39 [logger.py:42] Received request cmpl-ae221a7e2fcd4bf79b77e17ffb683873-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:39 [async_llm.py:261] Added request cmpl-ae221a7e2fcd4bf79b77e17ffb683873-0.
INFO 03-02 00:44:40 [logger.py:42] Received request cmpl-f6e3e5f27aad4caabc1bf02669823df2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:40 [async_llm.py:261] Added request cmpl-f6e3e5f27aad4caabc1bf02669823df2-0.
INFO 03-02 00:44:41 [logger.py:42] Received request cmpl-718d970d8fb24406b3100e55a91c3363-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:41 [async_llm.py:261] Added request cmpl-718d970d8fb24406b3100e55a91c3363-0.
INFO 03-02 00:44:42 [logger.py:42] Received request cmpl-26348a80f4fc4ace89dd56128c9502fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:42 [async_llm.py:261] Added request cmpl-26348a80f4fc4ace89dd56128c9502fa-0.
INFO 03-02 00:44:43 [logger.py:42] Received request cmpl-ad5b4ce9c2f14712af7fa895087b9646-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:43 [async_llm.py:261] Added request cmpl-ad5b4ce9c2f14712af7fa895087b9646-0.
INFO 03-02 00:44:44 [logger.py:42] Received request cmpl-dd2946d152e1429d859c414c64a04ea7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:44 [async_llm.py:261] Added request cmpl-dd2946d152e1429d859c414c64a04ea7-0.
INFO 03-02 00:44:45 [logger.py:42] Received request cmpl-76bd3ecebcee469d89c9135b29e7d7f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:45 [async_llm.py:261] Added request cmpl-76bd3ecebcee469d89c9135b29e7d7f2-0.
INFO 03-02 00:44:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:44:46 [logger.py:42] Received request cmpl-93eec4ed48f042f5a2c30ccc8f226438-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:46 [async_llm.py:261] Added request cmpl-93eec4ed48f042f5a2c30ccc8f226438-0.
INFO 03-02 00:44:47 [logger.py:42] Received request cmpl-c80e277db7484cbeadb8e0c1a3de5506-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:47 [async_llm.py:261] Added request cmpl-c80e277db7484cbeadb8e0c1a3de5506-0.
INFO 03-02 00:44:48 [logger.py:42] Received request cmpl-e15a762a51f742a69e774468a0a81e05-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:48 [async_llm.py:261] Added request cmpl-e15a762a51f742a69e774468a0a81e05-0.
INFO 03-02 00:44:50 [logger.py:42] Received request cmpl-fd0ede34dc3a4a38beaf03405d0647f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:50 [async_llm.py:261] Added request cmpl-fd0ede34dc3a4a38beaf03405d0647f8-0.
INFO 03-02 00:44:51 [logger.py:42] Received request cmpl-c0ae7163edee4b76bc504e7f3a67f762-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:51 [async_llm.py:261] Added request cmpl-c0ae7163edee4b76bc504e7f3a67f762-0.
INFO 03-02 00:44:52 [logger.py:42] Received request cmpl-777dec2b8dc142db80e863127723aad2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:52 [async_llm.py:261] Added request cmpl-777dec2b8dc142db80e863127723aad2-0.
INFO 03-02 00:44:53 [logger.py:42] Received request cmpl-02389db4078b415eb7489b55a54e42d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:53 [async_llm.py:261] Added request cmpl-02389db4078b415eb7489b55a54e42d1-0.
INFO 03-02 00:44:54 [logger.py:42] Received request cmpl-d9890371b9794d00a30be4a0875b6e35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:54 [async_llm.py:261] Added request cmpl-d9890371b9794d00a30be4a0875b6e35-0.
INFO 03-02 00:44:55 [logger.py:42] Received request cmpl-8e1bd6f927fd4846962ebbfb58d67eda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:44:55 [async_llm.py:261] Added request cmpl-8e1bd6f927fd4846962ebbfb58d67eda-0.
INFO 03-02 00:44:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... the same request/response pattern repeats roughly once per second from 00:44:56 through 00:45:38: identical prompt ('write a quick sort algorithm.'), identical SamplingParams (temperature=0.0, max_tokens=5), unique request IDs, each followed by "POST /v1/completions HTTP/1.1" 200 OK and an "Added request" entry. Periodic engine-stats lines are unchanged throughout: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% ...]
INFO 03-02 00:45:39 [logger.py:42] Received request cmpl-ab62e1de96b34765ae11a7aa9de60f1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:39 [async_llm.py:261] Added request cmpl-ab62e1de96b34765ae11a7aa9de60f1d-0.
INFO 03-02 00:45:40 [logger.py:42] Received request cmpl-accdca571940437dae2af7cb33129073-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:40 [async_llm.py:261] Added request cmpl-accdca571940437dae2af7cb33129073-0.
INFO 03-02 00:45:41 [logger.py:42] Received request cmpl-0faa02317b3b475d843048b120974cd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:41 [async_llm.py:261] Added request cmpl-0faa02317b3b475d843048b120974cd6-0.
INFO 03-02 00:45:42 [logger.py:42] Received request cmpl-099b104d0a6d4a95bc03f09a75c6e060-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:42 [async_llm.py:261] Added request cmpl-099b104d0a6d4a95bc03f09a75c6e060-0.
INFO 03-02 00:45:43 [logger.py:42] Received request cmpl-bc3d2610e7d04deca0850790925462cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:43 [async_llm.py:261] Added request cmpl-bc3d2610e7d04deca0850790925462cc-0.
INFO 03-02 00:45:44 [logger.py:42] Received request cmpl-6e63b4f8c5cf456eb439574d83b035ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:44 [async_llm.py:261] Added request cmpl-6e63b4f8c5cf456eb439574d83b035ca-0.
INFO 03-02 00:45:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:45 [logger.py:42] Received request cmpl-603c4fcf2c314738ae1bc8b9f20a4021-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:45 [async_llm.py:261] Added request cmpl-603c4fcf2c314738ae1bc8b9f20a4021-0.
INFO 03-02 00:45:47 [logger.py:42] Received request cmpl-5dd235c019944dd1a91d79fb7ec75354-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:47 [async_llm.py:261] Added request cmpl-5dd235c019944dd1a91d79fb7ec75354-0.
INFO 03-02 00:45:48 [logger.py:42] Received request cmpl-93ff64fcd1ac4e4bbcde31f512f6646a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:48 [async_llm.py:261] Added request cmpl-93ff64fcd1ac4e4bbcde31f512f6646a-0.
INFO 03-02 00:45:49 [logger.py:42] Received request cmpl-cf9b139f97fb445bb916fd6581163382-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:49 [async_llm.py:261] Added request cmpl-cf9b139f97fb445bb916fd6581163382-0.
INFO 03-02 00:45:50 [logger.py:42] Received request cmpl-0608df7e3e0f47dcb00d5c4cf889f4f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:50 [async_llm.py:261] Added request cmpl-0608df7e3e0f47dcb00d5c4cf889f4f6-0.
INFO 03-02 00:45:51 [logger.py:42] Received request cmpl-001b5a58af204c6da8ca88c73223b34a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:51 [async_llm.py:261] Added request cmpl-001b5a58af204c6da8ca88c73223b34a-0.
INFO 03-02 00:45:52 [logger.py:42] Received request cmpl-a020fae4fdfd47279b0c04086a431879-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:52 [async_llm.py:261] Added request cmpl-a020fae4fdfd47279b0c04086a431879-0.
INFO 03-02 00:45:53 [logger.py:42] Received request cmpl-e017dd0c031143d786437af411200866-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:53 [async_llm.py:261] Added request cmpl-e017dd0c031143d786437af411200866-0.
INFO 03-02 00:45:54 [logger.py:42] Received request cmpl-83071397b69b418a893a7e56df6e4409-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:54 [async_llm.py:261] Added request cmpl-83071397b69b418a893a7e56df6e4409-0.
INFO 03-02 00:45:55 [logger.py:42] Received request cmpl-c85572c4a3594021a914ff3c1ff34ae0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:55 [async_llm.py:261] Added request cmpl-c85572c4a3594021a914ff3c1ff34ae0-0.
INFO 03-02 00:45:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:45:56 [logger.py:42] Received request cmpl-73e8fcb9bf6a4bdeac030d5d00561c65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:56 [async_llm.py:261] Added request cmpl-73e8fcb9bf6a4bdeac030d5d00561c65-0.
INFO 03-02 00:45:57 [logger.py:42] Received request cmpl-db21ec27e8df43e78ac0f77ef3480981-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:57 [async_llm.py:261] Added request cmpl-db21ec27e8df43e78ac0f77ef3480981-0.
INFO 03-02 00:45:59 [logger.py:42] Received request cmpl-803f19a55398439f9be33c4a2932fdb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:45:59 [async_llm.py:261] Added request cmpl-803f19a55398439f9be33c4a2932fdb3-0.
INFO 03-02 00:46:00 [logger.py:42] Received request cmpl-4e2b406f9a6742d39b9a80f4b5d8fbe6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:00 [async_llm.py:261] Added request cmpl-4e2b406f9a6742d39b9a80f4b5d8fbe6-0.
INFO 03-02 00:46:01 [logger.py:42] Received request cmpl-f4ef41a16dd440e9be3e976e1a050cb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:01 [async_llm.py:261] Added request cmpl-f4ef41a16dd440e9be3e976e1a050cb0-0.
INFO 03-02 00:46:02 [logger.py:42] Received request cmpl-e516c45d048842aa914e41d80daa68d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:02 [async_llm.py:261] Added request cmpl-e516c45d048842aa914e41d80daa68d4-0.
INFO 03-02 00:46:03 [logger.py:42] Received request cmpl-741e654b05d04d2c89e4c4435f5726ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:03 [async_llm.py:261] Added request cmpl-741e654b05d04d2c89e4c4435f5726ff-0.
INFO 03-02 00:46:04 [logger.py:42] Received request cmpl-98ecb3626994499ca1fa8c638d6f58f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:04 [async_llm.py:261] Added request cmpl-98ecb3626994499ca1fa8c638d6f58f4-0.
INFO 03-02 00:46:05 [logger.py:42] Received request cmpl-27aa87ac947f453f97e0fcd0f55960b8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:05 [async_llm.py:261] Added request cmpl-27aa87ac947f453f97e0fcd0f55960b8-0.
INFO 03-02 00:46:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:46:06 [logger.py:42] Received request cmpl-adcce6c1bcd043968c1a988ba0fb927f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:06 [async_llm.py:261] Added request cmpl-adcce6c1bcd043968c1a988ba0fb927f-0.
INFO 03-02 00:46:07 [logger.py:42] Received request cmpl-19580e231c7745cb8b67bcbb99df5ebd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:07 [async_llm.py:261] Added request cmpl-19580e231c7745cb8b67bcbb99df5ebd-0.
INFO 03-02 00:46:08 [logger.py:42] Received request cmpl-e62682eaaee248f397d7dadd5429a643-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:08 [async_llm.py:261] Added request cmpl-e62682eaaee248f397d7dadd5429a643-0.
INFO 03-02 00:46:10 [logger.py:42] Received request cmpl-2a2c34ab55b246d19124a94eefb3df5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:10 [async_llm.py:261] Added request cmpl-2a2c34ab55b246d19124a94eefb3df5e-0.
INFO 03-02 00:46:11 [logger.py:42] Received request cmpl-c17c812c874e4f5cacd2279f19fbb49f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:11 [async_llm.py:261] Added request cmpl-c17c812c874e4f5cacd2279f19fbb49f-0.
INFO 03-02 00:46:12 [logger.py:42] Received request cmpl-bdbc0a03ceed4f92a0e729a507ebea14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:12 [async_llm.py:261] Added request cmpl-bdbc0a03ceed4f92a0e729a507ebea14-0.
INFO 03-02 00:46:13 [logger.py:42] Received request cmpl-7cf6685028cc494389efee11980b40c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:13 [async_llm.py:261] Added request cmpl-7cf6685028cc494389efee11980b40c8-0.
INFO 03-02 00:46:14 [logger.py:42] Received request cmpl-555adaea2249475fa190de6f0106f9ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:14 [async_llm.py:261] Added request cmpl-555adaea2249475fa190de6f0106f9ae-0.
INFO 03-02 00:46:15 [logger.py:42] Received request cmpl-83d6bc66b0c443678a7a21780e7270b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:15 [async_llm.py:261] Added request cmpl-83d6bc66b0c443678a7a21780e7270b4-0.
INFO 03-02 00:46:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[log condensed: the same three-line pattern — a "Received request cmpl-…-0" entry with an identical prompt ('write a quick sort algorithm.') and identical SamplingParams (max_tokens=5, temperature=0.0, top_p=1.0, n=1), a "POST /v1/completions HTTP/1.1" 200 OK access line from 1.2.3.5:1235, and an "Added request" confirmation — repeats roughly once per second from 00:46:16 through 00:46:57, each with a unique request id. The Engine 000 throughput summary (Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%) recurs unchanged every ~10 s.]
INFO 03-02 00:46:57 [async_llm.py:261] Added request cmpl-863c397651534b539c82002342e0bb13-0.
INFO 03-02 00:46:58 [logger.py:42] Received request cmpl-80dc1e514f194b45bae409b5147b9d67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:58 [async_llm.py:261] Added request cmpl-80dc1e514f194b45bae409b5147b9d67-0.
INFO 03-02 00:46:59 [logger.py:42] Received request cmpl-c2ba1c4b9ce44720aec1482016f46ba2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:46:59 [async_llm.py:261] Added request cmpl-c2ba1c4b9ce44720aec1482016f46ba2-0.
INFO 03-02 00:47:00 [logger.py:42] Received request cmpl-ca22f88c8ee2408fa167f7459c35b001-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:00 [async_llm.py:261] Added request cmpl-ca22f88c8ee2408fa167f7459c35b001-0.
INFO 03-02 00:47:01 [logger.py:42] Received request cmpl-5fb6dfa08b42452fa33eaab535af3daf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:01 [async_llm.py:261] Added request cmpl-5fb6dfa08b42452fa33eaab535af3daf-0.
INFO 03-02 00:47:02 [logger.py:42] Received request cmpl-b95aa28ad5a34a9b9009d9a73ec11a71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:02 [async_llm.py:261] Added request cmpl-b95aa28ad5a34a9b9009d9a73ec11a71-0.
INFO 03-02 00:47:03 [logger.py:42] Received request cmpl-00ebe17fdc1b445db704b45427d11e3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:03 [async_llm.py:261] Added request cmpl-00ebe17fdc1b445db704b45427d11e3f-0.
INFO 03-02 00:47:04 [logger.py:42] Received request cmpl-b9984eda2f784af5b381644cc32c3ede-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:04 [async_llm.py:261] Added request cmpl-b9984eda2f784af5b381644cc32c3ede-0.
INFO 03-02 00:47:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
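The periodic `loggers.py` throughput lines are consistent with the request cadence visible above. A minimal arithmetic sketch, assuming the 9 requests received in the 10 s window between the two stats lines (00:46:55 to 00:47:05), each with the 7-token prompt shown in `prompt_token_ids` and `max_tokens=5`:

```python
# Each request carries 7 prompt tokens (len(prompt_token_ids) above)
# and generates at most max_tokens=5 completion tokens.
PROMPT_TOKENS = 7
GEN_TOKENS = 5

# Assumption read off the timestamps: 9 requests arrived in the 10 s
# window between the two Engine 000 stats lines.
requests = 9
window_s = 10.0

prompt_tps = requests * PROMPT_TOKENS / window_s  # matches "Avg prompt throughput: 6.3 tokens/s"
gen_tps = requests * GEN_TOKENS / window_s        # matches "Avg generation throughput: 4.5 tokens/s"
print(prompt_tps, gen_tps)
```

The match (6.3 and 4.5 tokens/s) suggests the averages are computed over the 10-second interval between stats lines, though the exact windowing is an assumption here.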
INFO 03-02 00:47:05 [logger.py:42] Received request cmpl-f4216f826a514fa1a117aa2c2ff10df0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:05 [async_llm.py:261] Added request cmpl-f4216f826a514fa1a117aa2c2ff10df0-0.
INFO 03-02 00:47:07 [logger.py:42] Received request cmpl-ddfbe393c22a4758bc417772c358033a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:07 [async_llm.py:261] Added request cmpl-ddfbe393c22a4758bc417772c358033a-0.
INFO 03-02 00:47:08 [logger.py:42] Received request cmpl-7d008a949a074b0d8c2ca3c55eb27113-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:08 [async_llm.py:261] Added request cmpl-7d008a949a074b0d8c2ca3c55eb27113-0.
INFO 03-02 00:47:09 [logger.py:42] Received request cmpl-0e40dacc94df402891824361597501a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:09 [async_llm.py:261] Added request cmpl-0e40dacc94df402891824361597501a5-0.
INFO 03-02 00:47:10 [logger.py:42] Received request cmpl-d24a3cdb31a34f4ca8444320519bd92f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:10 [async_llm.py:261] Added request cmpl-d24a3cdb31a34f4ca8444320519bd92f-0.
INFO 03-02 00:47:11 [logger.py:42] Received request cmpl-5ba105f9e2ba464cb416543b62b8a94d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:11 [async_llm.py:261] Added request cmpl-5ba105f9e2ba464cb416543b62b8a94d-0.
INFO 03-02 00:47:12 [logger.py:42] Received request cmpl-cd1e7eec63ab48599920eb534d53946e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:12 [async_llm.py:261] Added request cmpl-cd1e7eec63ab48599920eb534d53946e-0.
INFO 03-02 00:47:13 [logger.py:42] Received request cmpl-f5c9755b2f084783abcd451575a9d387-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:13 [async_llm.py:261] Added request cmpl-f5c9755b2f084783abcd451575a9d387-0.
INFO 03-02 00:47:14 [logger.py:42] Received request cmpl-b281fba2c71b4e9d91738dfae42a4e43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:14 [async_llm.py:261] Added request cmpl-b281fba2c71b4e9d91738dfae42a4e43-0.
INFO 03-02 00:47:15 [logger.py:42] Received request cmpl-848aa40114ec49e5998d8400724681b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:15 [async_llm.py:261] Added request cmpl-848aa40114ec49e5998d8400724681b3-0.
INFO 03-02 00:47:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:47:16 [logger.py:42] Received request cmpl-1c4d0b475b554db9a6b8c1614b8a20ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:16 [async_llm.py:261] Added request cmpl-1c4d0b475b554db9a6b8c1614b8a20ce-0.
INFO 03-02 00:47:17 [logger.py:42] Received request cmpl-6cca6ee6976242b6b06d33f69f0e5d28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:17 [async_llm.py:261] Added request cmpl-6cca6ee6976242b6b06d33f69f0e5d28-0.
INFO 03-02 00:47:19 [logger.py:42] Received request cmpl-a0d974528ad5426090af228014809891-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:19 [async_llm.py:261] Added request cmpl-a0d974528ad5426090af228014809891-0.
INFO 03-02 00:47:20 [logger.py:42] Received request cmpl-b31f0a7041aa4248b74bd416030b269a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:20 [async_llm.py:261] Added request cmpl-b31f0a7041aa4248b74bd416030b269a-0.
INFO 03-02 00:47:21 [logger.py:42] Received request cmpl-dea273cdd59e462184b99b4ebe32af16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:21 [async_llm.py:261] Added request cmpl-dea273cdd59e462184b99b4ebe32af16-0.
INFO 03-02 00:47:22 [logger.py:42] Received request cmpl-8a7aade7e23f498ebcbb226dbe2091eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:22 [async_llm.py:261] Added request cmpl-8a7aade7e23f498ebcbb226dbe2091eb-0.
INFO 03-02 00:47:23 [logger.py:42] Received request cmpl-b7aef84a6d8b4c9480ccbabde661c9c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:23 [async_llm.py:261] Added request cmpl-b7aef84a6d8b4c9480ccbabde661c9c0-0.
INFO 03-02 00:47:24 [logger.py:42] Received request cmpl-7b0aa025e085494a8dee5e63bd5b1479-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:24 [async_llm.py:261] Added request cmpl-7b0aa025e085494a8dee5e63bd5b1479-0.
INFO 03-02 00:47:25 [logger.py:42] Received request cmpl-1972e6adfcb6402b8d055ce82635e207-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:25 [async_llm.py:261] Added request cmpl-1972e6adfcb6402b8d055ce82635e207-0.
INFO 03-02 00:47:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:47:26 [logger.py:42] Received request cmpl-07e59bddc8b74df495cac0cab38d4b89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:26 [async_llm.py:261] Added request cmpl-07e59bddc8b74df495cac0cab38d4b89-0.
INFO 03-02 00:47:27 [logger.py:42] Received request cmpl-de722af9db064c3d82ee2d948f25e453-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:27 [async_llm.py:261] Added request cmpl-de722af9db064c3d82ee2d948f25e453-0.
INFO 03-02 00:47:28 [logger.py:42] Received request cmpl-ca4b790c627c4b4c8275ebe4d2490eea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:28 [async_llm.py:261] Added request cmpl-ca4b790c627c4b4c8275ebe4d2490eea-0.
INFO 03-02 00:47:30 [logger.py:42] Received request cmpl-9baf0e46986b4b69892363cf52917b98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:30 [async_llm.py:261] Added request cmpl-9baf0e46986b4b69892363cf52917b98-0.
INFO 03-02 00:47:31 [logger.py:42] Received request cmpl-9f54c15d0cba4a89a8f946bdb622e19a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:31 [async_llm.py:261] Added request cmpl-9f54c15d0cba4a89a8f946bdb622e19a-0.
INFO 03-02 00:47:32 [logger.py:42] Received request cmpl-7cacb008061547d89a24c6bcafa84d38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:32 [async_llm.py:261] Added request cmpl-7cacb008061547d89a24c6bcafa84d38-0.
INFO 03-02 00:47:33 [logger.py:42] Received request cmpl-a23fd3999b314b0aa83099cc60f256dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:33 [async_llm.py:261] Added request cmpl-a23fd3999b314b0aa83099cc60f256dc-0.
INFO 03-02 00:47:34 [logger.py:42] Received request cmpl-c9c50cc4c63c4c1fbe9b74274748e923-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:34 [async_llm.py:261] Added request cmpl-c9c50cc4c63c4c1fbe9b74274748e923-0.
INFO 03-02 00:47:35 [logger.py:42] Received request cmpl-af6d8aa4a57f4e5e896fa5a31c4536b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:35 [async_llm.py:261] Added request cmpl-af6d8aa4a57f4e5e896fa5a31c4536b2-0.
INFO 03-02 00:47:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
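The engine stats line above can be sanity-checked against the request cadence visible in the log: each request carries 7 prompt tokens (the length of `prompt_token_ids`) and caps generation at `max_tokens=5`, and requests arrive roughly once per second with an occasional 2 s gap. A back-of-the-envelope check, where the numbers come from the log but the 10-second averaging window and the 9-requests-per-window count are inferred from the timestamps, not stated by vLLM:

```python
# Rough consistency check for the "Avg prompt/generation throughput" stats line.
prompt_tokens_per_req = 7   # len([128000, 5040, 264, 4062, 3460, 12384, 13])
gen_tokens_per_req = 5      # max_tokens=5; assumed fully generated each time
window_s = 10.0             # loggers.py reports roughly every 10 s (inferred)
reqs_in_window = 9          # ~1 req/s with one 2 s gap (e.g. 00:47:39 -> 00:47:41)

prompt_tput = reqs_in_window * prompt_tokens_per_req / window_s
gen_tput = reqs_in_window * gen_tokens_per_req / window_s
print(f"~{prompt_tput:.1f} prompt tok/s, ~{gen_tput:.1f} gen tok/s")
```

Under those assumptions this works out to 6.3 prompt tok/s and 4.5 gen tok/s, matching the reported averages, which also explains the 0.0% KV-cache usage snapshots: each tiny request finishes well before the next stats tick.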
INFO 03-02 00:47:36 [logger.py:42] Received request cmpl-09d08c92d0484a80b363f0436d6ecd26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:36 [async_llm.py:261] Added request cmpl-09d08c92d0484a80b363f0436d6ecd26-0.
INFO 03-02 00:47:37 [logger.py:42] Received request cmpl-58933896440d440d88c7264604e7254e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:37 [async_llm.py:261] Added request cmpl-58933896440d440d88c7264604e7254e-0.
INFO 03-02 00:47:38 [logger.py:42] Received request cmpl-dd588bfe33fe4d5c98890009678ce111-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:38 [async_llm.py:261] Added request cmpl-dd588bfe33fe4d5c98890009678ce111-0.
INFO 03-02 00:47:39 [logger.py:42] Received request cmpl-786a0532290240a69a6fc1344da15353-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:39 [async_llm.py:261] Added request cmpl-786a0532290240a69a6fc1344da15353-0.
INFO 03-02 00:47:41 [logger.py:42] Received request cmpl-3710b51888d646a1b7941cc89cdec5d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:41 [async_llm.py:261] Added request cmpl-3710b51888d646a1b7941cc89cdec5d4-0.
INFO 03-02 00:47:42 [logger.py:42] Received request cmpl-14371c75af7d4df8a86c022a71c6f90a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:42 [async_llm.py:261] Added request cmpl-14371c75af7d4df8a86c022a71c6f90a-0.
INFO 03-02 00:47:43 [logger.py:42] Received request cmpl-776361ee1ff744ccab89b470144dfd37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:43 [async_llm.py:261] Added request cmpl-776361ee1ff744ccab89b470144dfd37-0.
INFO 03-02 00:47:44 [logger.py:42] Received request cmpl-aa1f28c84f6e4f368bca5b4a696191b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:44 [async_llm.py:261] Added request cmpl-aa1f28c84f6e4f368bca5b4a696191b2-0.
INFO 03-02 00:47:45 [logger.py:42] Received request cmpl-3623252aba6c443ca47e630d0d187fed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:45 [async_llm.py:261] Added request cmpl-3623252aba6c443ca47e630d0d187fed-0.
INFO 03-02 00:47:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:47:46 [logger.py:42] Received request cmpl-af2a308534984e31ad089cde6d0a56d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:46 [async_llm.py:261] Added request cmpl-af2a308534984e31ad089cde6d0a56d1-0.
INFO 03-02 00:47:47 [logger.py:42] Received request cmpl-8979c95a828940cc8066d9b2cf120b65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:47 [async_llm.py:261] Added request cmpl-8979c95a828940cc8066d9b2cf120b65-0.
INFO 03-02 00:47:48 [logger.py:42] Received request cmpl-ea39bf0689b14ab4b319da9e47052f1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:48 [async_llm.py:261] Added request cmpl-ea39bf0689b14ab4b319da9e47052f1c-0.
INFO 03-02 00:47:49 [logger.py:42] Received request cmpl-7de1331692dd4e4eb674f627bdd7e24e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:49 [async_llm.py:261] Added request cmpl-7de1331692dd4e4eb674f627bdd7e24e-0.
INFO 03-02 00:47:50 [logger.py:42] Received request cmpl-c1cc5989274949e294e0d5319fa07f45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:50 [async_llm.py:261] Added request cmpl-c1cc5989274949e294e0d5319fa07f45-0.
INFO 03-02 00:47:51 [logger.py:42] Received request cmpl-d847fcb4211a45baa32c0bc8b333f48c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:51 [async_llm.py:261] Added request cmpl-d847fcb4211a45baa32c0bc8b333f48c-0.
INFO 03-02 00:47:53 [logger.py:42] Received request cmpl-e695ca84cd42407c8791cff477bbf5ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:53 [async_llm.py:261] Added request cmpl-e695ca84cd42407c8791cff477bbf5ff-0.
INFO 03-02 00:47:54 [logger.py:42] Received request cmpl-9b1e4381bed04b5db0df7c6873053ff7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:54 [async_llm.py:261] Added request cmpl-9b1e4381bed04b5db0df7c6873053ff7-0.
INFO 03-02 00:47:55 [logger.py:42] Received request cmpl-a0330dcf8f5249489755322ba854a3ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:55 [async_llm.py:261] Added request cmpl-a0330dcf8f5249489755322ba854a3ca-0.
INFO 03-02 00:47:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:47:56 [logger.py:42] Received request cmpl-690f84a3aef347fab76304e98d4eea20-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:56 [async_llm.py:261] Added request cmpl-690f84a3aef347fab76304e98d4eea20-0.
INFO 03-02 00:47:57 [logger.py:42] Received request cmpl-2504699391f4436ca471c487d44a423b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:57 [async_llm.py:261] Added request cmpl-2504699391f4436ca471c487d44a423b-0.
INFO 03-02 00:47:58 [logger.py:42] Received request cmpl-5f4452c285b44452bbbf17f4bd0d5c15-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:58 [async_llm.py:261] Added request cmpl-5f4452c285b44452bbbf17f4bd0d5c15-0.
INFO 03-02 00:47:59 [logger.py:42] Received request cmpl-01ee74393f64400fa9809bea09b0377a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:47:59 [async_llm.py:261] Added request cmpl-01ee74393f64400fa9809bea09b0377a-0.
INFO 03-02 00:48:00 [logger.py:42] Received request cmpl-199f706ad12a410c89208bf44052fb70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:00 [async_llm.py:261] Added request cmpl-199f706ad12a410c89208bf44052fb70-0.
INFO 03-02 00:48:01 [logger.py:42] Received request cmpl-568e7fdf875f4cf78b4cfc2cf56d0030-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:01 [async_llm.py:261] Added request cmpl-568e7fdf875f4cf78b4cfc2cf56d0030-0.
INFO 03-02 00:48:02 [logger.py:42] Received request cmpl-51f27d57aaee49fa9d78454013c96d80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:02 [async_llm.py:261] Added request cmpl-51f27d57aaee49fa9d78454013c96d80-0.
INFO 03-02 00:48:04 [logger.py:42] Received request cmpl-e627b19db5c74cceb32fbe21abe1f3d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:04 [async_llm.py:261] Added request cmpl-e627b19db5c74cceb32fbe21abe1f3d0-0.
INFO 03-02 00:48:05 [logger.py:42] Received request cmpl-9018b0bc7bfa44fa950bfc341bbbfa8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:05 [async_llm.py:261] Added request cmpl-9018b0bc7bfa44fa950bfc341bbbfa8b-0.
INFO 03-02 00:48:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:06 [logger.py:42] Received request cmpl-668c109c74724b9d80f6228027bed242-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:06 [async_llm.py:261] Added request cmpl-668c109c74724b9d80f6228027bed242-0.
INFO 03-02 00:48:07 [logger.py:42] Received request cmpl-e9c3aa053e774490a8ec00f04445183e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:07 [async_llm.py:261] Added request cmpl-e9c3aa053e774490a8ec00f04445183e-0.
INFO 03-02 00:48:08 [logger.py:42] Received request cmpl-c06deb7c74164eb492f9c3767f127c70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:08 [async_llm.py:261] Added request cmpl-c06deb7c74164eb492f9c3767f127c70-0.
INFO 03-02 00:48:09 [logger.py:42] Received request cmpl-c7f2836885f647d384fd7577e888ce95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:09 [async_llm.py:261] Added request cmpl-c7f2836885f647d384fd7577e888ce95-0.
INFO 03-02 00:48:10 [logger.py:42] Received request cmpl-d9950cfcdb414e24867c139533b5aabf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:10 [async_llm.py:261] Added request cmpl-d9950cfcdb414e24867c139533b5aabf-0.
INFO 03-02 00:48:11 [logger.py:42] Received request cmpl-282ded2bced546ba93fbd1df0d0b88c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:11 [async_llm.py:261] Added request cmpl-282ded2bced546ba93fbd1df0d0b88c1-0.
INFO 03-02 00:48:12 [logger.py:42] Received request cmpl-5732d91b0fb244e7b4fea55d3f0a9905-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:12 [async_llm.py:261] Added request cmpl-5732d91b0fb244e7b4fea55d3f0a9905-0.
INFO 03-02 00:48:13 [logger.py:42] Received request cmpl-381ea58b94e94d13b4111a0f54904c6f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:13 [async_llm.py:261] Added request cmpl-381ea58b94e94d13b4111a0f54904c6f-0.
INFO 03-02 00:48:14 [logger.py:42] Received request cmpl-89d0c6ab957f484ba7c387fbaec8105b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:14 [async_llm.py:261] Added request cmpl-89d0c6ab957f484ba7c387fbaec8105b-0.
INFO 03-02 00:48:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:16 [logger.py:42] Received request cmpl-c1561b9bb5ed45a5a33c86a835cbc368-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:16 [async_llm.py:261] Added request cmpl-c1561b9bb5ed45a5a33c86a835cbc368-0.
INFO 03-02 00:48:17 [logger.py:42] Received request cmpl-35a190970e4742938771961a615bf4b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:17 [async_llm.py:261] Added request cmpl-35a190970e4742938771961a615bf4b2-0.
INFO 03-02 00:48:18 [logger.py:42] Received request cmpl-eeee1be7f3894ff5919c70e353813aa9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:18 [async_llm.py:261] Added request cmpl-eeee1be7f3894ff5919c70e353813aa9-0.
INFO 03-02 00:48:19 [logger.py:42] Received request cmpl-c956d6cfc2914da980b606f0027c867f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:19 [async_llm.py:261] Added request cmpl-c956d6cfc2914da980b606f0027c867f-0.
INFO 03-02 00:48:20 [logger.py:42] Received request cmpl-e806eb608ee246d99a5f77a8814ff375-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:20 [async_llm.py:261] Added request cmpl-e806eb608ee246d99a5f77a8814ff375-0.
INFO 03-02 00:48:21 [logger.py:42] Received request cmpl-a3d89274aaa74f43b3a2f579fc992321-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:21 [async_llm.py:261] Added request cmpl-a3d89274aaa74f43b3a2f579fc992321-0.
INFO 03-02 00:48:22 [logger.py:42] Received request cmpl-92627ed1de6044f582b34e084aa9f066-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:22 [async_llm.py:261] Added request cmpl-92627ed1de6044f582b34e084aa9f066-0.
INFO 03-02 00:48:23 [logger.py:42] Received request cmpl-11e51d2e949d4455b7970a9567bd7260-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:23 [async_llm.py:261] Added request cmpl-11e51d2e949d4455b7970a9567bd7260-0.
INFO 03-02 00:48:24 [logger.py:42] Received request cmpl-5e865a9d6ac5414784630cf15f108fa0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:24 [async_llm.py:261] Added request cmpl-5e865a9d6ac5414784630cf15f108fa0-0.
INFO 03-02 00:48:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:25 [logger.py:42] Received request cmpl-f48bbf2fa6dd4caebf9b00d158ce4096-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:25 [async_llm.py:261] Added request cmpl-f48bbf2fa6dd4caebf9b00d158ce4096-0.
INFO 03-02 00:48:27 [logger.py:42] Received request cmpl-6480da845cd2427c806fc83a412e80da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:27 [async_llm.py:261] Added request cmpl-6480da845cd2427c806fc83a412e80da-0.
INFO 03-02 00:48:28 [logger.py:42] Received request cmpl-22799b76adec4d349769c11a8972c077-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:28 [async_llm.py:261] Added request cmpl-22799b76adec4d349769c11a8972c077-0.
INFO 03-02 00:48:29 [logger.py:42] Received request cmpl-a7a6b8dfb9bb4a97a2e367ebf5186200-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:29 [async_llm.py:261] Added request cmpl-a7a6b8dfb9bb4a97a2e367ebf5186200-0.
INFO 03-02 00:48:30 [logger.py:42] Received request cmpl-2459bd25206b48baa27cae101f8d18c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:30 [async_llm.py:261] Added request cmpl-2459bd25206b48baa27cae101f8d18c4-0.
INFO 03-02 00:48:31 [logger.py:42] Received request cmpl-ccfa2b3003fa49be8a9a4a044de89da9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:31 [async_llm.py:261] Added request cmpl-ccfa2b3003fa49be8a9a4a044de89da9-0.
INFO 03-02 00:48:32 [logger.py:42] Received request cmpl-452711bf438a4bb78b998fdc368c568c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:32 [async_llm.py:261] Added request cmpl-452711bf438a4bb78b998fdc368c568c-0.
INFO 03-02 00:48:33 [logger.py:42] Received request cmpl-6bfc86fc06e54b71a45239fd59a603fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:33 [async_llm.py:261] Added request cmpl-6bfc86fc06e54b71a45239fd59a603fd-0.
INFO 03-02 00:48:34 [logger.py:42] Received request cmpl-6f635c6eb987452889e4f2ea5ed39ea5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:34 [async_llm.py:261] Added request cmpl-6f635c6eb987452889e4f2ea5ed39ea5-0.
INFO 03-02 00:48:35 [logger.py:42] Received request cmpl-19a1461f77c24173b2748496a565aa4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:35 [async_llm.py:261] Added request cmpl-19a1461f77c24173b2748496a565aa4e-0.
INFO 03-02 00:48:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:36 [logger.py:42] Received request cmpl-1affe17bf3de40cb8638a7b1caa4ca30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:36 [async_llm.py:261] Added request cmpl-1affe17bf3de40cb8638a7b1caa4ca30-0.
INFO 03-02 00:48:38 [logger.py:42] Received request cmpl-31b37f06e1df488ebced469d6792a34c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:38 [async_llm.py:261] Added request cmpl-31b37f06e1df488ebced469d6792a34c-0.
INFO 03-02 00:48:39 [logger.py:42] Received request cmpl-40cab0f9c5a043abb59d285cf5945c38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:39 [async_llm.py:261] Added request cmpl-40cab0f9c5a043abb59d285cf5945c38-0.
INFO 03-02 00:48:40 [logger.py:42] Received request cmpl-7166ce248052476ab0815ed3a6250cb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:40 [async_llm.py:261] Added request cmpl-7166ce248052476ab0815ed3a6250cb6-0.
INFO 03-02 00:48:41 [logger.py:42] Received request cmpl-4d16d5bd0fc8478789644a9d777a2605-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:41 [async_llm.py:261] Added request cmpl-4d16d5bd0fc8478789644a9d777a2605-0.
INFO 03-02 00:48:42 [logger.py:42] Received request cmpl-21e49b86e6ca4263a95ee4754453de16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:42 [async_llm.py:261] Added request cmpl-21e49b86e6ca4263a95ee4754453de16-0.
INFO 03-02 00:48:43 [logger.py:42] Received request cmpl-9a2891dbac0845d38da738b72803c0a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:43 [async_llm.py:261] Added request cmpl-9a2891dbac0845d38da738b72803c0a8-0.
INFO 03-02 00:48:44 [logger.py:42] Received request cmpl-bfd4947887a04d33828946d44fb879d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:44 [async_llm.py:261] Added request cmpl-bfd4947887a04d33828946d44fb879d2-0.
INFO 03-02 00:48:45 [logger.py:42] Received request cmpl-f4be7525f7f74f2fb6357af567e855a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:45 [async_llm.py:261] Added request cmpl-f4be7525f7f74f2fb6357af567e855a9-0.
INFO 03-02 00:48:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:46 [logger.py:42] Received request cmpl-8515cd91fbb5489299e6b388109c1d29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:46 [async_llm.py:261] Added request cmpl-8515cd91fbb5489299e6b388109c1d29-0.
INFO 03-02 00:48:47 [logger.py:42] Received request cmpl-8aeb745c24fa4a668abc02fae350404f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:47 [async_llm.py:261] Added request cmpl-8aeb745c24fa4a668abc02fae350404f-0.
INFO 03-02 00:48:48 [logger.py:42] Received request cmpl-433dc905d4854a9f998a97d2e70867ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:48 [async_llm.py:261] Added request cmpl-433dc905d4854a9f998a97d2e70867ed-0.
INFO 03-02 00:48:50 [logger.py:42] Received request cmpl-1510ea377b3f44168b8dc159813ceaaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:50 [async_llm.py:261] Added request cmpl-1510ea377b3f44168b8dc159813ceaaa-0.
INFO 03-02 00:48:51 [logger.py:42] Received request cmpl-4a66f5e307204a7eb048d6d8ce781a9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:51 [async_llm.py:261] Added request cmpl-4a66f5e307204a7eb048d6d8ce781a9c-0.
INFO 03-02 00:48:52 [logger.py:42] Received request cmpl-e2d8ed60f1174a28a089c90c2f2f55ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:52 [async_llm.py:261] Added request cmpl-e2d8ed60f1174a28a089c90c2f2f55ae-0.
INFO 03-02 00:48:53 [logger.py:42] Received request cmpl-7b0a0c5ae3d64a9391310ae7d23c012f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:53 [async_llm.py:261] Added request cmpl-7b0a0c5ae3d64a9391310ae7d23c012f-0.
INFO 03-02 00:48:54 [logger.py:42] Received request cmpl-3750da0dd38d4435b7d137b8cee90528-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:54 [async_llm.py:261] Added request cmpl-3750da0dd38d4435b7d137b8cee90528-0.
INFO 03-02 00:48:55 [logger.py:42] Received request cmpl-3a7dabf1bda34156b45b1a7c3c8ec562-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:55 [async_llm.py:261] Added request cmpl-3a7dabf1bda34156b45b1a7c3c8ec562-0.
INFO 03-02 00:48:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:48:56 [logger.py:42] Received request cmpl-459b47d9543e48a0a8f6e2c04f54e1f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:56 [async_llm.py:261] Added request cmpl-459b47d9543e48a0a8f6e2c04f54e1f2-0.
INFO 03-02 00:48:57 [logger.py:42] Received request cmpl-b18ab2aa768341f78177d9ec7f8cb231-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:57 [async_llm.py:261] Added request cmpl-b18ab2aa768341f78177d9ec7f8cb231-0.
INFO 03-02 00:48:58 [logger.py:42] Received request cmpl-b809e9f3f95b4bd19e7f6a89549da9b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:58 [async_llm.py:261] Added request cmpl-b809e9f3f95b4bd19e7f6a89549da9b2-0.
INFO 03-02 00:48:59 [logger.py:42] Received request cmpl-7532e37e889247eeba81ef5ef4dd9c31-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:48:59 [async_llm.py:261] Added request cmpl-7532e37e889247eeba81ef5ef4dd9c31-0.
INFO 03-02 00:49:01 [logger.py:42] Received request cmpl-690bf63ec9c04570b60f9d889468aa65-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:01 [async_llm.py:261] Added request cmpl-690bf63ec9c04570b60f9d889468aa65-0.
INFO 03-02 00:49:02 [logger.py:42] Received request cmpl-2487bce7f97a4d8496977e91eadfa812-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:02 [async_llm.py:261] Added request cmpl-2487bce7f97a4d8496977e91eadfa812-0.
INFO 03-02 00:49:03 [logger.py:42] Received request cmpl-9c2d64ee50944e2ba81c49af320f1de9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:03 [async_llm.py:261] Added request cmpl-9c2d64ee50944e2ba81c49af320f1de9-0.
INFO 03-02 00:49:04 [logger.py:42] Received request cmpl-9962d2ebe6f4439b826f6acf66c19ca3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:04 [async_llm.py:261] Added request cmpl-9962d2ebe6f4439b826f6acf66c19ca3-0.
INFO 03-02 00:49:05 [logger.py:42] Received request cmpl-917ab16e9bf642eaa75e719c3dc4e365-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:05 [async_llm.py:261] Added request cmpl-917ab16e9bf642eaa75e719c3dc4e365-0.
INFO 03-02 00:49:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:06 [logger.py:42] Received request cmpl-7ab79d07cd5044f8b15144101645f853-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:06 [async_llm.py:261] Added request cmpl-7ab79d07cd5044f8b15144101645f853-0.
INFO 03-02 00:49:07 [logger.py:42] Received request cmpl-cc80479ed66444cca195cd5935005e8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:07 [async_llm.py:261] Added request cmpl-cc80479ed66444cca195cd5935005e8b-0.
INFO 03-02 00:49:08 [logger.py:42] Received request cmpl-cf81f0685b1c451580e0e08917cc662d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:08 [async_llm.py:261] Added request cmpl-cf81f0685b1c451580e0e08917cc662d-0.
INFO 03-02 00:49:09 [logger.py:42] Received request cmpl-0b279d983fc54ea49ca785747efc5c60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:09 [async_llm.py:261] Added request cmpl-0b279d983fc54ea49ca785747efc5c60-0.
INFO 03-02 00:49:10 [logger.py:42] Received request cmpl-4202c9bfff4b4d298ea264e1d2fbfe83-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:10 [async_llm.py:261] Added request cmpl-4202c9bfff4b4d298ea264e1d2fbfe83-0.
INFO 03-02 00:49:11 [logger.py:42] Received request cmpl-77e48e305b5a495198e455d524a0a7fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:11 [async_llm.py:261] Added request cmpl-77e48e305b5a495198e455d524a0a7fc-0.
INFO 03-02 00:49:13 [logger.py:42] Received request cmpl-494fafdee29f4adcbdc61b81cf85c08e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:13 [async_llm.py:261] Added request cmpl-494fafdee29f4adcbdc61b81cf85c08e-0.
INFO 03-02 00:49:14 [logger.py:42] Received request cmpl-7a166202932f48d89b1b7603990836b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:14 [async_llm.py:261] Added request cmpl-7a166202932f48d89b1b7603990836b7-0.
INFO 03-02 00:49:15 [logger.py:42] Received request cmpl-32d836495a0e40e18171c3109dcb5d47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:15 [async_llm.py:261] Added request cmpl-32d836495a0e40e18171c3109dcb5d47-0.
INFO 03-02 00:49:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:16 [logger.py:42] Received request cmpl-52dcff986e6c46f9b50463346b3c37ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:16 [async_llm.py:261] Added request cmpl-52dcff986e6c46f9b50463346b3c37ed-0.
INFO 03-02 00:49:17 [logger.py:42] Received request cmpl-17ceca3237b142bebe1aa815ab5e6056-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:17 [async_llm.py:261] Added request cmpl-17ceca3237b142bebe1aa815ab5e6056-0.
INFO 03-02 00:49:18 [logger.py:42] Received request cmpl-d8a923c64c4a4ad7bc9e20135485b9a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:18 [async_llm.py:261] Added request cmpl-d8a923c64c4a4ad7bc9e20135485b9a2-0.
INFO 03-02 00:49:19 [logger.py:42] Received request cmpl-d7edaa7f8d8e42ee8756b826814d767e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:19 [async_llm.py:261] Added request cmpl-d7edaa7f8d8e42ee8756b826814d767e-0.
INFO 03-02 00:49:20 [logger.py:42] Received request cmpl-7b221a3cf6f84f2baff76e3ccd25e8d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:20 [async_llm.py:261] Added request cmpl-7b221a3cf6f84f2baff76e3ccd25e8d4-0.
INFO 03-02 00:49:21 [logger.py:42] Received request cmpl-e74d79d08c1b4377a5048f9609469eb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:21 [async_llm.py:261] Added request cmpl-e74d79d08c1b4377a5048f9609469eb6-0.
INFO 03-02 00:49:22 [logger.py:42] Received request cmpl-a5f9ef2d14d64413a2ce9faa05c6d92f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:22 [async_llm.py:261] Added request cmpl-a5f9ef2d14d64413a2ce9faa05c6d92f-0.
INFO 03-02 00:49:24 [logger.py:42] Received request cmpl-ad862f8c274d4a108ba37a9279c31a42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:24 [async_llm.py:261] Added request cmpl-ad862f8c274d4a108ba37a9279c31a42-0.
INFO 03-02 00:49:25 [logger.py:42] Received request cmpl-6e775706b0524a66bad0cac5140fabac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:25 [async_llm.py:261] Added request cmpl-6e775706b0524a66bad0cac5140fabac-0.
INFO 03-02 00:49:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:26 [logger.py:42] Received request cmpl-c1aa4984054c4df1a4b540425690667d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:26 [async_llm.py:261] Added request cmpl-c1aa4984054c4df1a4b540425690667d-0.
INFO 03-02 00:49:27 [logger.py:42] Received request cmpl-ba3941498736404d9c91c3e617cd5959-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:27 [async_llm.py:261] Added request cmpl-ba3941498736404d9c91c3e617cd5959-0.
INFO 03-02 00:49:28 [logger.py:42] Received request cmpl-923eaad591484003ba9e8037328d4d79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:28 [async_llm.py:261] Added request cmpl-923eaad591484003ba9e8037328d4d79-0.
INFO 03-02 00:49:29 [logger.py:42] Received request cmpl-f6ac6dc30fbb4283a0417e88ea5aabff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:29 [async_llm.py:261] Added request cmpl-f6ac6dc30fbb4283a0417e88ea5aabff-0.
INFO 03-02 00:49:30 [logger.py:42] Received request cmpl-d24c284f66b74fb1b061f01eca58f793-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:30 [async_llm.py:261] Added request cmpl-d24c284f66b74fb1b061f01eca58f793-0.
INFO 03-02 00:49:31 [logger.py:42] Received request cmpl-223dc101bdc543e7899e1098ac2a90e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:31 [async_llm.py:261] Added request cmpl-223dc101bdc543e7899e1098ac2a90e0-0.
INFO 03-02 00:49:32 [logger.py:42] Received request cmpl-a33520db7ae14936bdeaca280d8d6b94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:32 [async_llm.py:261] Added request cmpl-a33520db7ae14936bdeaca280d8d6b94-0.
INFO 03-02 00:49:33 [logger.py:42] Received request cmpl-ba44cf44bb284d148ca928ffc3c54a76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:33 [async_llm.py:261] Added request cmpl-ba44cf44bb284d148ca928ffc3c54a76-0.
INFO 03-02 00:49:34 [logger.py:42] Received request cmpl-330103d10f404e73a27c24c19753b9e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:34 [async_llm.py:261] Added request cmpl-330103d10f404e73a27c24c19753b9e2-0.
INFO 03-02 00:49:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
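The periodic `Engine 000` metrics lines are consistent with the visible request pattern: each request carries 7 prompt tokens (the length of `prompt_token_ids`) and generates at most 5 tokens (`max_tokens=5`), at roughly one request per second judging by the timestamps. A quick sanity check of the expected steady-state throughput, under those assumptions:

```python
# Sanity-check the logged throughput against the request pattern.
# Assumptions (from the surrounding log lines, not measured directly):
#   - ~1 request per second (one "Received request" line per second)
#   - 7 prompt tokens per request (len of the prompt_token_ids list)
#   - up to 5 generated tokens per request (max_tokens=5)

prompt_tokens_per_req = 7     # len([128000, 5040, 264, 4062, 3460, 12384, 13])
max_gen_tokens_per_req = 5    # max_tokens=5 in SamplingParams
reqs_per_sec = 1.0            # approximate arrival rate from the timestamps

expected_prompt_tps = prompt_tokens_per_req * reqs_per_sec
expected_gen_tps = max_gen_tokens_per_req * reqs_per_sec

# Expected ~7 tok/s prompt and ~5 tok/s generation; the logged averages
# (6.3 and 4.5) are slightly lower, consistent with occasional 2-second
# gaps between requests in the log.
print(expected_prompt_tps, expected_gen_tps)
```

The small shortfall in the logged averages (6.3 vs. 7.0 prompt tok/s) matches the occasional skipped second visible in the timestamps.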
INFO 03-02 00:49:36 [logger.py:42] Received request cmpl-3fe320ddc78b402683e2fe9d63e62219-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:36 [async_llm.py:261] Added request cmpl-3fe320ddc78b402683e2fe9d63e62219-0.
INFO 03-02 00:49:37 [logger.py:42] Received request cmpl-7caac645618c49e3a04cd79cf8cc1bbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:37 [async_llm.py:261] Added request cmpl-7caac645618c49e3a04cd79cf8cc1bbe-0.
INFO 03-02 00:49:38 [logger.py:42] Received request cmpl-a2a6d57ebc7b47dba0df9fde9c10b322-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:38 [async_llm.py:261] Added request cmpl-a2a6d57ebc7b47dba0df9fde9c10b322-0.
INFO 03-02 00:49:39 [logger.py:42] Received request cmpl-7f06895575a24bd6b1a2437cb18befbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:39 [async_llm.py:261] Added request cmpl-7f06895575a24bd6b1a2437cb18befbe-0.
INFO 03-02 00:49:40 [logger.py:42] Received request cmpl-c343eb9378b944439d2b05609d29b41e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:40 [async_llm.py:261] Added request cmpl-c343eb9378b944439d2b05609d29b41e-0.
INFO 03-02 00:49:41 [logger.py:42] Received request cmpl-9338e922a58743818ebde5e83952a5c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:41 [async_llm.py:261] Added request cmpl-9338e922a58743818ebde5e83952a5c1-0.
INFO 03-02 00:49:42 [logger.py:42] Received request cmpl-43ff4f9993b147ca944a6c25a4790599-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:42 [async_llm.py:261] Added request cmpl-43ff4f9993b147ca944a6c25a4790599-0.
INFO 03-02 00:49:43 [logger.py:42] Received request cmpl-c39935b4919a4f6cb982a004ebba560a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:43 [async_llm.py:261] Added request cmpl-c39935b4919a4f6cb982a004ebba560a-0.
INFO 03-02 00:49:44 [logger.py:42] Received request cmpl-9159471b93804f66a1154b6f61635e46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:44 [async_llm.py:261] Added request cmpl-9159471b93804f66a1154b6f61635e46-0.
INFO 03-02 00:49:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:45 [logger.py:42] Received request cmpl-655d427003a04cf4931e414f91f48ade-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:45 [async_llm.py:261] Added request cmpl-655d427003a04cf4931e414f91f48ade-0.
INFO 03-02 00:49:47 [logger.py:42] Received request cmpl-4290e0337a0c42458ac0ab89e76312ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:47 [async_llm.py:261] Added request cmpl-4290e0337a0c42458ac0ab89e76312ba-0.
INFO 03-02 00:49:48 [logger.py:42] Received request cmpl-21bfb86072344a0381a4a948cd56f441-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:48 [async_llm.py:261] Added request cmpl-21bfb86072344a0381a4a948cd56f441-0.
INFO 03-02 00:49:49 [logger.py:42] Received request cmpl-6b66990df7d64c56953520f2cd8afe8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:49 [async_llm.py:261] Added request cmpl-6b66990df7d64c56953520f2cd8afe8e-0.
INFO 03-02 00:49:50 [logger.py:42] Received request cmpl-064d3f8890444bab9f5e9a98802baee3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:50 [async_llm.py:261] Added request cmpl-064d3f8890444bab9f5e9a98802baee3-0.
INFO 03-02 00:49:51 [logger.py:42] Received request cmpl-ed86ea1788d74e659be7abed4abe6fda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:51 [async_llm.py:261] Added request cmpl-ed86ea1788d74e659be7abed4abe6fda-0.
INFO 03-02 00:49:52 [logger.py:42] Received request cmpl-33ec766b405b48b1b34865d3c59935aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:52 [async_llm.py:261] Added request cmpl-33ec766b405b48b1b34865d3c59935aa-0.
INFO 03-02 00:49:53 [logger.py:42] Received request cmpl-270bfd6d7d924134aec13677018f6c3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:53 [async_llm.py:261] Added request cmpl-270bfd6d7d924134aec13677018f6c3a-0.
INFO 03-02 00:49:54 [logger.py:42] Received request cmpl-86e386ebd3a748bd871df89c592874f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:54 [async_llm.py:261] Added request cmpl-86e386ebd3a748bd871df89c592874f0-0.
INFO 03-02 00:49:55 [logger.py:42] Received request cmpl-98123e6798a04f8fb9249daa7c471557-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:55 [async_llm.py:261] Added request cmpl-98123e6798a04f8fb9249daa7c471557-0.
INFO 03-02 00:49:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:49:56 [logger.py:42] Received request cmpl-42b4c9134b8242b8b408b025d1eb0347-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:56 [async_llm.py:261] Added request cmpl-42b4c9134b8242b8b408b025d1eb0347-0.
INFO 03-02 00:49:57 [logger.py:42] Received request cmpl-26da5651e3844808ae513c25d96e02ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:57 [async_llm.py:261] Added request cmpl-26da5651e3844808ae513c25d96e02ab-0.
INFO 03-02 00:49:59 [logger.py:42] Received request cmpl-a8bf92cf8afa4be2af6bd505df527f54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:49:59 [async_llm.py:261] Added request cmpl-a8bf92cf8afa4be2af6bd505df527f54-0.
INFO 03-02 00:50:00 [logger.py:42] Received request cmpl-d8a6a5df7d44412eaccd07f3ab4382a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:00 [async_llm.py:261] Added request cmpl-d8a6a5df7d44412eaccd07f3ab4382a1-0.
INFO 03-02 00:50:01 [logger.py:42] Received request cmpl-c32e9d8549444beb90c91db81a62ce85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:01 [async_llm.py:261] Added request cmpl-c32e9d8549444beb90c91db81a62ce85-0.
INFO 03-02 00:50:02 [logger.py:42] Received request cmpl-e51dfef5b71647a2ad984aed8d1b7c47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:02 [async_llm.py:261] Added request cmpl-e51dfef5b71647a2ad984aed8d1b7c47-0.
INFO 03-02 00:50:03 [logger.py:42] Received request cmpl-ddd97533f8bb47c39ecebc5e6377b864-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:03 [async_llm.py:261] Added request cmpl-ddd97533f8bb47c39ecebc5e6377b864-0.
INFO 03-02 00:50:04 [logger.py:42] Received request cmpl-b69afc7148024b27b0957989eb2bbf61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:04 [async_llm.py:261] Added request cmpl-b69afc7148024b27b0957989eb2bbf61-0.
INFO 03-02 00:50:05 [logger.py:42] Received request cmpl-380c3ea24e3e41f89ccaf24e3995fa84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:05 [async_llm.py:261] Added request cmpl-380c3ea24e3e41f89ccaf24e3995fa84-0.
INFO 03-02 00:50:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:50:06 [logger.py:42] Received request cmpl-9b4059f0e6b749f1b1e3f608ef948010-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:06 [async_llm.py:261] Added request cmpl-9b4059f0e6b749f1b1e3f608ef948010-0.
INFO 03-02 00:50:07 [logger.py:42] Received request cmpl-0177356d35e942cb80662057ffaa8295-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:07 [async_llm.py:261] Added request cmpl-0177356d35e942cb80662057ffaa8295-0.
INFO 03-02 00:50:08 [logger.py:42] Received request cmpl-e6a96bd3ec37498b8ac4dc1384d96c39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:08 [async_llm.py:261] Added request cmpl-e6a96bd3ec37498b8ac4dc1384d96c39-0.
INFO 03-02 00:50:10 [logger.py:42] Received request cmpl-99a5630733a84523a3ef459d51498e7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:10 [async_llm.py:261] Added request cmpl-99a5630733a84523a3ef459d51498e7a-0.
[… 40 near-identical request/response groups (00:50:11–00:50:53, prompt 'write a quick sort algorithm.', max_tokens=5) omitted …]
INFO 03-02 00:50:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[… the engine-stats line above repeated with identical values at 00:50:25, 00:50:35, and 00:50:45 …]
INFO 03-02 00:50:54 [logger.py:42] Received request cmpl-edd3b43033f746cfbe1182fc551594d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:54 [async_llm.py:261] Added request cmpl-edd3b43033f746cfbe1182fc551594d1-0.
INFO 03-02 00:50:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:50:56 [logger.py:42] Received request cmpl-bb214d4edfed4efbac92f759bf554298-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:56 [async_llm.py:261] Added request cmpl-bb214d4edfed4efbac92f759bf554298-0.
INFO 03-02 00:50:57 [logger.py:42] Received request cmpl-335b6a34194a485dafbb23f8cec1be6b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:57 [async_llm.py:261] Added request cmpl-335b6a34194a485dafbb23f8cec1be6b-0.
INFO 03-02 00:50:58 [logger.py:42] Received request cmpl-47e1e993d85040db80905a577aa9023b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:58 [async_llm.py:261] Added request cmpl-47e1e993d85040db80905a577aa9023b-0.
INFO 03-02 00:50:59 [logger.py:42] Received request cmpl-a0e2148294344590b76ff71943cc07d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:50:59 [async_llm.py:261] Added request cmpl-a0e2148294344590b76ff71943cc07d3-0.
INFO 03-02 00:51:00 [logger.py:42] Received request cmpl-8ed8981974ae43c5a872aa0c6e11383a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:00 [async_llm.py:261] Added request cmpl-8ed8981974ae43c5a872aa0c6e11383a-0.
INFO 03-02 00:51:01 [logger.py:42] Received request cmpl-9a80e9fb98da48a0b40647b3e1a6d867-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:01 [async_llm.py:261] Added request cmpl-9a80e9fb98da48a0b40647b3e1a6d867-0.
INFO 03-02 00:51:02 [logger.py:42] Received request cmpl-d5353aaa7b0d42ce97a9d0d4419315a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:02 [async_llm.py:261] Added request cmpl-d5353aaa7b0d42ce97a9d0d4419315a8-0.
INFO 03-02 00:51:03 [logger.py:42] Received request cmpl-03f97b323753457abcdb1d5748f3ee0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:03 [async_llm.py:261] Added request cmpl-03f97b323753457abcdb1d5748f3ee0d-0.
INFO 03-02 00:51:04 [logger.py:42] Received request cmpl-7083ddc4f47140ef9819cf2fd95eb467-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:04 [async_llm.py:261] Added request cmpl-7083ddc4f47140ef9819cf2fd95eb467-0.
INFO 03-02 00:51:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:05 [logger.py:42] Received request cmpl-9608a9c5fa954c38b9b31cb7bb5e6e71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:05 [async_llm.py:261] Added request cmpl-9608a9c5fa954c38b9b31cb7bb5e6e71-0.
INFO 03-02 00:51:07 [logger.py:42] Received request cmpl-7d246edf826b40a7a9fd83a845b272b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:07 [async_llm.py:261] Added request cmpl-7d246edf826b40a7a9fd83a845b272b4-0.
INFO 03-02 00:51:08 [logger.py:42] Received request cmpl-657e5297b130480b8ec99aa0bdd3aa1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:08 [async_llm.py:261] Added request cmpl-657e5297b130480b8ec99aa0bdd3aa1a-0.
INFO 03-02 00:51:09 [logger.py:42] Received request cmpl-df5691312e154ebab9b96db371100446-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:09 [async_llm.py:261] Added request cmpl-df5691312e154ebab9b96db371100446-0.
INFO 03-02 00:51:10 [logger.py:42] Received request cmpl-7576e1ee43ef4dc29526cff79da8a31c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:10 [async_llm.py:261] Added request cmpl-7576e1ee43ef4dc29526cff79da8a31c-0.
INFO 03-02 00:51:11 [logger.py:42] Received request cmpl-78d9e5f909c74250b84a99d81d9c36d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:11 [async_llm.py:261] Added request cmpl-78d9e5f909c74250b84a99d81d9c36d5-0.
INFO 03-02 00:51:12 [logger.py:42] Received request cmpl-48390de60964494583608860852367d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:12 [async_llm.py:261] Added request cmpl-48390de60964494583608860852367d0-0.
INFO 03-02 00:51:13 [logger.py:42] Received request cmpl-5f7bb358f5a3489a9fabad6e90743ee1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:13 [async_llm.py:261] Added request cmpl-5f7bb358f5a3489a9fabad6e90743ee1-0.
INFO 03-02 00:51:14 [logger.py:42] Received request cmpl-cf62c7f984a947c4aa822d57fb1089ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:14 [async_llm.py:261] Added request cmpl-cf62c7f984a947c4aa822d57fb1089ff-0.
INFO 03-02 00:51:15 [logger.py:42] Received request cmpl-6a45ac4be79a47139807173a753eeb42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:15 [async_llm.py:261] Added request cmpl-6a45ac4be79a47139807173a753eeb42-0.
INFO 03-02 00:51:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:16 [logger.py:42] Received request cmpl-813aa4badb074b55958d0740e7da8ad6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:16 [async_llm.py:261] Added request cmpl-813aa4badb074b55958d0740e7da8ad6-0.
INFO 03-02 00:51:17 [logger.py:42] Received request cmpl-9cd06ffe42504beb81c6e9affd2c3256-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:17 [async_llm.py:261] Added request cmpl-9cd06ffe42504beb81c6e9affd2c3256-0.
INFO 03-02 00:51:19 [logger.py:42] Received request cmpl-1f1816c9b5494241991b87d85a663443-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:19 [async_llm.py:261] Added request cmpl-1f1816c9b5494241991b87d85a663443-0.
INFO 03-02 00:51:20 [logger.py:42] Received request cmpl-e2c5cc0dabd04532a80acaf265b14d81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:20 [async_llm.py:261] Added request cmpl-e2c5cc0dabd04532a80acaf265b14d81-0.
INFO 03-02 00:51:21 [logger.py:42] Received request cmpl-c9736f2fe2234613b7420ffca50edbab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:21 [async_llm.py:261] Added request cmpl-c9736f2fe2234613b7420ffca50edbab-0.
INFO 03-02 00:51:22 [logger.py:42] Received request cmpl-8573052fa0b84a0f8c451f905fcbb197-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:22 [async_llm.py:261] Added request cmpl-8573052fa0b84a0f8c451f905fcbb197-0.
INFO 03-02 00:51:23 [logger.py:42] Received request cmpl-2b02a9d75fb1466e892c3b01f0534305-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:23 [async_llm.py:261] Added request cmpl-2b02a9d75fb1466e892c3b01f0534305-0.
INFO 03-02 00:51:24 [logger.py:42] Received request cmpl-36a86a270cd340eebbfa65bb18a1ed4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:24 [async_llm.py:261] Added request cmpl-36a86a270cd340eebbfa65bb18a1ed4f-0.
INFO 03-02 00:51:25 [logger.py:42] Received request cmpl-c271148aef324c43b92cf4ec6d56aa49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:25 [async_llm.py:261] Added request cmpl-c271148aef324c43b92cf4ec6d56aa49-0.
INFO 03-02 00:51:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:51:26 [logger.py:42] Received request cmpl-9a54a9cf3cdb44a69068ccf990dab511-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:26 [async_llm.py:261] Added request cmpl-9a54a9cf3cdb44a69068ccf990dab511-0.
INFO 03-02 00:51:27 [logger.py:42] Received request cmpl-b1c68e05406f4912a4949f9bd0cf07fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:27 [async_llm.py:261] Added request cmpl-b1c68e05406f4912a4949f9bd0cf07fd-0.
INFO 03-02 00:51:28 [logger.py:42] Received request cmpl-689c7ab194e74f4ca0e6177c11feb75f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:51:28 [async_llm.py:261] Added request cmpl-689c7ab194e74f4ca0e6177c11feb75f-0.
[... 6 further requests (00:51:30 – 00:51:35), identical to the entry above except for timestamp and request ID; each logged as Received request / "POST /v1/completions HTTP/1.1" 200 OK / Added request ...]
INFO 03-02 00:51:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
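The periodic `loggers.py:116` line packs several engine metrics into a single string: prompt and generation throughput, queue depths, KV-cache usage, and prefix-cache hit rate. When scraping these logs, a small parser is handy; the sketch below derives its field names directly from the log line format shown above (the function name `parse_engine_stats` is our own).

```python
import re

# One periodic stats line, copied from the log above (loggers.py:116).
LINE = ("INFO 03-02 00:51:35 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
        "Prefix cache hit rate: 0.0%")

# Named groups mirror the labels in the log line itself.
PATTERN = re.compile(
    r"Engine (?P<engine>\d+): "
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_pct>[\d.]+)%"
)

def parse_engine_stats(line: str) -> dict:
    """Extract the numeric engine metrics from one periodic stats line."""
    m = PATTERN.search(line)
    if m is None:
        raise ValueError("not an engine stats line")
    d = m.groupdict()
    return {
        "engine": int(d["engine"]),
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_pct"]),
    }
```

With throughput at ~4.5 generated tokens/s and `max_tokens=5` per request, the engine keeps up with the roughly one-request-per-second arrival rate seen above, which is why Running and Waiting stay at 0.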
[... 9 further requests (00:51:36 – 00:51:45), identical except for timestamp and request ID ...]
INFO 03-02 00:51:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further requests (00:51:46 – 00:51:55), identical except for timestamp and request ID ...]
INFO 03-02 00:51:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further requests (00:51:56 – 00:52:05), identical except for timestamp and request ID ...]
INFO 03-02 00:52:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
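Each logged request is an OpenAI-compatible `POST /v1/completions` call. A minimal client sketch follows; the base URL is an assumption (substitute your funcpod's endpoint), the model name is taken from the Funcpod table, and the sampling parameters mirror those visible in the log (`max_tokens=5`, `temperature=0.0`, `n=1`).

```python
import json
from urllib import request

# Hypothetical base URL -- replace with the actual funcpod endpoint.
BASE_URL = "http://localhost:8000"

def build_completion_payload(prompt: str, max_tokens: int = 5) -> dict:
    """Build a /v1/completions request body matching the logged parameters."""
    return {
        "model": "CR-70B",         # model name from the Funcpod table above
        "prompt": prompt,
        "max_tokens": max_tokens,  # the logged requests use max_tokens=5
        "temperature": 0.0,        # temperature=0.0 -> greedy, deterministic decoding
        "n": 1,
    }

def post_completion(payload: dict) -> dict:
    """POST the payload to /v1/completions and return the decoded JSON body."""
    req = request.Request(
        f"{BASE_URL}/v1/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Sending `build_completion_payload('write a quick sort algorithm.')` once per second reproduces the traffic pattern in this log.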
[... 8 further requests (00:52:06 – 00:52:13), identical except for timestamp and request ID; the final entry is cut off mid-request in this capture ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:13 [async_llm.py:261] Added request cmpl-66ddfeee15374d9395229f4a9257093c-0.
INFO 03-02 00:52:14 [logger.py:42] Received request cmpl-797de595a5c741cd84c254fae25ce290-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:14 [async_llm.py:261] Added request cmpl-797de595a5c741cd84c254fae25ce290-0.
INFO 03-02 00:52:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:16 [logger.py:42] Received request cmpl-5f1ae97abdd04b54a9f6b7c9528142f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:16 [async_llm.py:261] Added request cmpl-5f1ae97abdd04b54a9f6b7c9528142f4-0.
INFO 03-02 00:52:17 [logger.py:42] Received request cmpl-0a2b784efb384dfdbd1ba0e77f718325-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:17 [async_llm.py:261] Added request cmpl-0a2b784efb384dfdbd1ba0e77f718325-0.
INFO 03-02 00:52:18 [logger.py:42] Received request cmpl-db74aa1038a84b5ea683c3f6277b4d96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:18 [async_llm.py:261] Added request cmpl-db74aa1038a84b5ea683c3f6277b4d96-0.
INFO 03-02 00:52:19 [logger.py:42] Received request cmpl-541c64051e114c1c9146ad1c18f2e00b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:19 [async_llm.py:261] Added request cmpl-541c64051e114c1c9146ad1c18f2e00b-0.
INFO 03-02 00:52:20 [logger.py:42] Received request cmpl-118d9b0a84c74b699e7e123444b5231c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:20 [async_llm.py:261] Added request cmpl-118d9b0a84c74b699e7e123444b5231c-0.
INFO 03-02 00:52:21 [logger.py:42] Received request cmpl-5c9943f428864428b8e028d897fb9e07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:21 [async_llm.py:261] Added request cmpl-5c9943f428864428b8e028d897fb9e07-0.
INFO 03-02 00:52:22 [logger.py:42] Received request cmpl-e2f6376e1e4546ab9710336492be3dfc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:22 [async_llm.py:261] Added request cmpl-e2f6376e1e4546ab9710336492be3dfc-0.
INFO 03-02 00:52:23 [logger.py:42] Received request cmpl-674bf016ec0a4c34ab050dd1bde2db10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:23 [async_llm.py:261] Added request cmpl-674bf016ec0a4c34ab050dd1bde2db10-0.
INFO 03-02 00:52:24 [logger.py:42] Received request cmpl-49c4e01407e44165aa6acd4981515fd8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:24 [async_llm.py:261] Added request cmpl-49c4e01407e44165aa6acd4981515fd8-0.
INFO 03-02 00:52:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:25 [logger.py:42] Received request cmpl-a26308d65a21483b9c0cf0b3a71e3186-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:25 [async_llm.py:261] Added request cmpl-a26308d65a21483b9c0cf0b3a71e3186-0.
INFO 03-02 00:52:27 [logger.py:42] Received request cmpl-337bd2e27b884390a605215bdcaccc92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:27 [async_llm.py:261] Added request cmpl-337bd2e27b884390a605215bdcaccc92-0.
INFO 03-02 00:52:28 [logger.py:42] Received request cmpl-40dae04f75cc4830811127c50b8ce790-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:28 [async_llm.py:261] Added request cmpl-40dae04f75cc4830811127c50b8ce790-0.
INFO 03-02 00:52:29 [logger.py:42] Received request cmpl-53fcc42dba91435e91239971ea1e6041-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:29 [async_llm.py:261] Added request cmpl-53fcc42dba91435e91239971ea1e6041-0.
INFO 03-02 00:52:30 [logger.py:42] Received request cmpl-3309df0498fc48ff808ad11b12a68005-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:30 [async_llm.py:261] Added request cmpl-3309df0498fc48ff808ad11b12a68005-0.
INFO 03-02 00:52:31 [logger.py:42] Received request cmpl-ca8223e821ae4015adb6b6d5d7eebd3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:31 [async_llm.py:261] Added request cmpl-ca8223e821ae4015adb6b6d5d7eebd3f-0.
INFO 03-02 00:52:32 [logger.py:42] Received request cmpl-ef127fee2bf4466e8f4c31a4a20c2af2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:32 [async_llm.py:261] Added request cmpl-ef127fee2bf4466e8f4c31a4a20c2af2-0.
INFO 03-02 00:52:33 [logger.py:42] Received request cmpl-a46beb2fe61f4d34972b767c7b508084-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:33 [async_llm.py:261] Added request cmpl-a46beb2fe61f4d34972b767c7b508084-0.
INFO 03-02 00:52:34 [logger.py:42] Received request cmpl-3c9bbafe10544085ac644e3e8d829b9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:34 [async_llm.py:261] Added request cmpl-3c9bbafe10544085ac644e3e8d829b9c-0.
INFO 03-02 00:52:35 [logger.py:42] Received request cmpl-3d40c107c33848849f474b438ae961b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:35 [async_llm.py:261] Added request cmpl-3d40c107c33848849f474b438ae961b6-0.
INFO 03-02 00:52:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:36 [logger.py:42] Received request cmpl-842eea4f021d487a9fc114bd81973c26-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:36 [async_llm.py:261] Added request cmpl-842eea4f021d487a9fc114bd81973c26-0.
INFO 03-02 00:52:37 [logger.py:42] Received request cmpl-aeeae6d3106f4596bbbbff1eaf5a1dcd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:37 [async_llm.py:261] Added request cmpl-aeeae6d3106f4596bbbbff1eaf5a1dcd-0.
INFO 03-02 00:52:39 [logger.py:42] Received request cmpl-4b3a544544f3405eb2bc5c20d49062c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:39 [async_llm.py:261] Added request cmpl-4b3a544544f3405eb2bc5c20d49062c4-0.
INFO 03-02 00:52:40 [logger.py:42] Received request cmpl-fa67c0cae6484a2782e9c76db43efd66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:40 [async_llm.py:261] Added request cmpl-fa67c0cae6484a2782e9c76db43efd66-0.
INFO 03-02 00:52:41 [logger.py:42] Received request cmpl-00f8721ad3e84dabb0c619ce1b632410-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:41 [async_llm.py:261] Added request cmpl-00f8721ad3e84dabb0c619ce1b632410-0.
INFO 03-02 00:52:42 [logger.py:42] Received request cmpl-dbd3a6030ad94be3b056d0fff4453d66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:42 [async_llm.py:261] Added request cmpl-dbd3a6030ad94be3b056d0fff4453d66-0.
INFO 03-02 00:52:43 [logger.py:42] Received request cmpl-3e46c17d49d341ae930e856c6db62901-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:43 [async_llm.py:261] Added request cmpl-3e46c17d49d341ae930e856c6db62901-0.
INFO 03-02 00:52:44 [logger.py:42] Received request cmpl-396b5e722fa04b42be1ead694bd0a92d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:44 [async_llm.py:261] Added request cmpl-396b5e722fa04b42be1ead694bd0a92d-0.
INFO 03-02 00:52:45 [logger.py:42] Received request cmpl-7ec1805ead024030b84d23ec8824b63d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:45 [async_llm.py:261] Added request cmpl-7ec1805ead024030b84d23ec8824b63d-0.
INFO 03-02 00:52:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:46 [logger.py:42] Received request cmpl-2489281065f04545b038d58ee992147e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:46 [async_llm.py:261] Added request cmpl-2489281065f04545b038d58ee992147e-0.
INFO 03-02 00:52:47 [logger.py:42] Received request cmpl-71c0b22b7ed745ebbbe7e4ed96a20794-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:47 [async_llm.py:261] Added request cmpl-71c0b22b7ed745ebbbe7e4ed96a20794-0.
INFO 03-02 00:52:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:52:56 [logger.py:42] Received request cmpl-20ff258f0d5d4a6c867d8b0b53851bb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:56 [async_llm.py:261] Added request cmpl-20ff258f0d5d4a6c867d8b0b53851bb0-0.
INFO 03-02 00:52:57 [logger.py:42] Received request cmpl-3e294b23b5254dc98f670dbbc14e7359-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:57 [async_llm.py:261] Added request cmpl-3e294b23b5254dc98f670dbbc14e7359-0.
INFO 03-02 00:52:58 [logger.py:42] Received request cmpl-b9218a5d26754c7c84c4a6abdd4950fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:58 [async_llm.py:261] Added request cmpl-b9218a5d26754c7c84c4a6abdd4950fe-0.
INFO 03-02 00:52:59 [logger.py:42] Received request cmpl-00c68c71374a4611a019b6563ead9a76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:52:59 [async_llm.py:261] Added request cmpl-00c68c71374a4611a019b6563ead9a76-0.
INFO 03-02 00:53:00 [logger.py:42] Received request cmpl-d904a413dc51424fb15f2cabea99e7c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:00 [async_llm.py:261] Added request cmpl-d904a413dc51424fb15f2cabea99e7c1-0.
INFO 03-02 00:53:02 [logger.py:42] Received request cmpl-671b90ea29cb45518fe8911fb342d695-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:02 [async_llm.py:261] Added request cmpl-671b90ea29cb45518fe8911fb342d695-0.
INFO 03-02 00:53:03 [logger.py:42] Received request cmpl-9eceb5be3dbe453097c65a7d65110a8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:03 [async_llm.py:261] Added request cmpl-9eceb5be3dbe453097c65a7d65110a8d-0.
INFO 03-02 00:53:04 [logger.py:42] Received request cmpl-0bd4c4d2738e4962971ad22b1e424dd7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:04 [async_llm.py:261] Added request cmpl-0bd4c4d2738e4962971ad22b1e424dd7-0.
INFO 03-02 00:53:05 [logger.py:42] Received request cmpl-f35a216f8c7848859db6112002a1b499-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:05 [async_llm.py:261] Added request cmpl-f35a216f8c7848859db6112002a1b499-0.
INFO 03-02 00:53:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:53:06 [logger.py:42] Received request cmpl-e670e7ccef044fc08a33abe18b6c78ea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:06 [async_llm.py:261] Added request cmpl-e670e7ccef044fc08a33abe18b6c78ea-0.
INFO 03-02 00:53:07 [logger.py:42] Received request cmpl-b164c0e90bd64988aaafd6a58c7725d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:07 [async_llm.py:261] Added request cmpl-b164c0e90bd64988aaafd6a58c7725d6-0.
INFO 03-02 00:53:08 [logger.py:42] Received request cmpl-e5e2ca15c0eb4399b78abb97e45e94dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:08 [async_llm.py:261] Added request cmpl-e5e2ca15c0eb4399b78abb97e45e94dc-0.
INFO 03-02 00:53:09 [logger.py:42] Received request cmpl-aa8c93c83a574ebaaa5d4988638baa2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:09 [async_llm.py:261] Added request cmpl-aa8c93c83a574ebaaa5d4988638baa2f-0.
INFO 03-02 00:53:10 [logger.py:42] Received request cmpl-865abe4cfdb34156a682a610c4bd2eec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:10 [async_llm.py:261] Added request cmpl-865abe4cfdb34156a682a610c4bd2eec-0.
INFO 03-02 00:53:11 [logger.py:42] Received request cmpl-eb4641dd87734d2f85efeb1d662f54bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:11 [async_llm.py:261] Added request cmpl-eb4641dd87734d2f85efeb1d662f54bb-0.
INFO 03-02 00:53:13 [logger.py:42] Received request cmpl-7320c04c826241b7aece756d91bd7a6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:13 [async_llm.py:261] Added request cmpl-7320c04c826241b7aece756d91bd7a6a-0.
INFO 03-02 00:53:14 [logger.py:42] Received request cmpl-94fc651a0cb7487193dcfee45ea54d2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:14 [async_llm.py:261] Added request cmpl-94fc651a0cb7487193dcfee45ea54d2f-0.
INFO 03-02 00:53:15 [logger.py:42] Received request cmpl-a07cac23bde94133875b0edad67526d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:15 [async_llm.py:261] Added request cmpl-a07cac23bde94133875b0edad67526d0-0.
INFO 03-02 00:53:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:53:16 [logger.py:42] Received request cmpl-03ff297080f94500833c0b1d6ee259ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:16 [async_llm.py:261] Added request cmpl-03ff297080f94500833c0b1d6ee259ae-0.
INFO 03-02 00:53:17 [logger.py:42] Received request cmpl-cf2ac2e86a1240bc9264a9eca4a9ffdb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:17 [async_llm.py:261] Added request cmpl-cf2ac2e86a1240bc9264a9eca4a9ffdb-0.
INFO 03-02 00:53:18 [logger.py:42] Received request cmpl-e2c8fb0999d04359881358fab050ed14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:18 [async_llm.py:261] Added request cmpl-e2c8fb0999d04359881358fab050ed14-0.
INFO 03-02 00:53:19 [logger.py:42] Received request cmpl-b6525d33ffe14a47b753110f7db11ef8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:19 [async_llm.py:261] Added request cmpl-b6525d33ffe14a47b753110f7db11ef8-0.
INFO 03-02 00:53:20 [logger.py:42] Received request cmpl-a0315b38dc454bd6b177bf83251334e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:20 [async_llm.py:261] Added request cmpl-a0315b38dc454bd6b177bf83251334e0-0.
INFO 03-02 00:53:21 [logger.py:42] Received request cmpl-0c1438ac8bc2435fa6c06630f9ad5a9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:21 [async_llm.py:261] Added request cmpl-0c1438ac8bc2435fa6c06630f9ad5a9c-0.
INFO 03-02 00:53:22 [logger.py:42] Received request cmpl-8e49e3fb7af441bab6a68f49b12c84a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:22 [async_llm.py:261] Added request cmpl-8e49e3fb7af441bab6a68f49b12c84a1-0.
INFO 03-02 00:53:24 [logger.py:42] Received request cmpl-07df4b47f360478c9498c3bfc9ce29d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:24 [async_llm.py:261] Added request cmpl-07df4b47f360478c9498c3bfc9ce29d2-0.
INFO 03-02 00:53:25 [logger.py:42] Received request cmpl-46bb4508a8b54d44bc10962042f33cc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:25 [async_llm.py:261] Added request cmpl-46bb4508a8b54d44bc10962042f33cc6-0.
INFO 03-02 00:53:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:53:26 [logger.py:42] Received request cmpl-6b3d59902de6415d908feea4eba728d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:26 [async_llm.py:261] Added request cmpl-6b3d59902de6415d908feea4eba728d5-0.
INFO 03-02 00:53:27 [logger.py:42] Received request cmpl-689f75dde6a04ee3ba1fda43549ac906-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:53:27 [async_llm.py:261] Added request cmpl-689f75dde6a04ee3ba1fda43549ac906-0.
[... 7 further identical request cycles (Received request / "POST /v1/completions" 200 OK / Added request; prompt 'write a quick sort algorithm.', max_tokens=5) from 00:53:28 to 00:53:34 elided ...]
INFO 03-02 00:53:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further identical request cycles (Received request / "POST /v1/completions" 200 OK / Added request; prompt 'write a quick sort algorithm.', max_tokens=5) from 00:53:36 to 00:53:44 elided ...]
INFO 03-02 00:53:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 further identical request cycles (Received request / "POST /v1/completions" 200 OK / Added request; prompt 'write a quick sort algorithm.', max_tokens=5) from 00:53:45 to 00:53:55 elided ...]
INFO 03-02 00:53:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further identical request cycles (Received request / "POST /v1/completions" 200 OK / Added request; prompt 'write a quick sort algorithm.', max_tokens=5) from 00:53:56 to 00:54:05 elided ...]
INFO 03-02 00:54:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 5 further identical request cycles (Received request / "POST /v1/completions" 200 OK / Added request; prompt 'write a quick sort algorithm.', max_tokens=5) from 00:54:06 to 00:54:11 elided ...]
INFO 03-02 00:54:12 [logger.py:42] Received request cmpl-b058e83eb2074ef597e7d15d25c7113b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:12 [async_llm.py:261] Added request cmpl-b058e83eb2074ef597e7d15d25c7113b-0.
INFO 03-02 00:54:13 [logger.py:42] Received request cmpl-a3e20647f3104133a71d846c3883dc88-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:13 [async_llm.py:261] Added request cmpl-a3e20647f3104133a71d846c3883dc88-0.
INFO 03-02 00:54:14 [logger.py:42] Received request cmpl-e7012f57676542cea4cba2c11d6e8284-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:14 [async_llm.py:261] Added request cmpl-e7012f57676542cea4cba2c11d6e8284-0.
INFO 03-02 00:54:15 [logger.py:42] Received request cmpl-3cbb6a68c57246748cc85b8365c75b8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:15 [async_llm.py:261] Added request cmpl-3cbb6a68c57246748cc85b8365c75b8e-0.
INFO 03-02 00:54:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:16 [logger.py:42] Received request cmpl-4957644fc6f344309654f15c725b2b6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:16 [async_llm.py:261] Added request cmpl-4957644fc6f344309654f15c725b2b6c-0.
INFO 03-02 00:54:17 [logger.py:42] Received request cmpl-777d1539e1b6408ea217a60100413f4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:17 [async_llm.py:261] Added request cmpl-777d1539e1b6408ea217a60100413f4e-0.
INFO 03-02 00:54:18 [logger.py:42] Received request cmpl-a88c774dceb14228be7cfa69f1a2eba3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:18 [async_llm.py:261] Added request cmpl-a88c774dceb14228be7cfa69f1a2eba3-0.
INFO 03-02 00:54:19 [logger.py:42] Received request cmpl-117259db29d64851ab69999d80aa72c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:19 [async_llm.py:261] Added request cmpl-117259db29d64851ab69999d80aa72c2-0.
INFO 03-02 00:54:20 [logger.py:42] Received request cmpl-cd725126fcf24e7aa0eab47dc2f511c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:20 [async_llm.py:261] Added request cmpl-cd725126fcf24e7aa0eab47dc2f511c6-0.
INFO 03-02 00:54:22 [logger.py:42] Received request cmpl-4cf697daf6f34790aa2780c1055a2f8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:22 [async_llm.py:261] Added request cmpl-4cf697daf6f34790aa2780c1055a2f8c-0.
INFO 03-02 00:54:23 [logger.py:42] Received request cmpl-54aec01fa3d24f8f83480d0845a2a06e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:23 [async_llm.py:261] Added request cmpl-54aec01fa3d24f8f83480d0845a2a06e-0.
INFO 03-02 00:54:24 [logger.py:42] Received request cmpl-b5f9db0b4cc14bcc80fdb5f365d5f81c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:24 [async_llm.py:261] Added request cmpl-b5f9db0b4cc14bcc80fdb5f365d5f81c-0.
INFO 03-02 00:54:25 [logger.py:42] Received request cmpl-85bcc0ed7b424be2b5cf63ca6182518e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:25 [async_llm.py:261] Added request cmpl-85bcc0ed7b424be2b5cf63ca6182518e-0.
INFO 03-02 00:54:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
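The periodic `loggers.py:116` lines above carry the engine's rolling metrics in a fixed textual layout. A small sketch of pulling those numbers out for monitoring — the regex simply mirrors the field layout seen in this log and is illustrative tooling, not part of vLLM itself:

```python
# Illustrative only: parse the engine-stats lines emitted between request
# triplets above. The field names (prompt_tps, gen_tps, ...) are our own.
import re

STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

line = ("INFO 03-02 00:54:25 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
        "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
        "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")

m = STATS_RE.search(line)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)  # -> {'prompt_tps': 6.3, 'gen_tps': 4.5, 'running': 0.0, 'waiting': 0.0, 'kv_pct': 0.0}
```

`Running: 0 reqs` with nonzero throughput is consistent with the pattern above: each request asks for only `max_tokens=5`, so it finishes between stats ticks.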
INFO 03-02 00:54:26 [logger.py:42] Received request cmpl-c0298ff8cc154270a83547a775368942-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:26 [async_llm.py:261] Added request cmpl-c0298ff8cc154270a83547a775368942-0.
INFO 03-02 00:54:27 [logger.py:42] Received request cmpl-b8fa89a7899246e78ee36fd108c226ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:27 [async_llm.py:261] Added request cmpl-b8fa89a7899246e78ee36fd108c226ff-0.
INFO 03-02 00:54:28 [logger.py:42] Received request cmpl-2cf7caa09812441d9172cd3c12566695-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:28 [async_llm.py:261] Added request cmpl-2cf7caa09812441d9172cd3c12566695-0.
INFO 03-02 00:54:29 [logger.py:42] Received request cmpl-c2fbda1636b94e2c8defe040148cf573-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:29 [async_llm.py:261] Added request cmpl-c2fbda1636b94e2c8defe040148cf573-0.
INFO 03-02 00:54:30 [logger.py:42] Received request cmpl-9bb8548b715943579807824bb6f57660-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:30 [async_llm.py:261] Added request cmpl-9bb8548b715943579807824bb6f57660-0.
INFO 03-02 00:54:31 [logger.py:42] Received request cmpl-d7730163a7c3467a8968fb690fc4c269-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:31 [async_llm.py:261] Added request cmpl-d7730163a7c3467a8968fb690fc4c269-0.
INFO 03-02 00:54:33 [logger.py:42] Received request cmpl-000e1f7a75474fd4ba742a8532ae2aca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:33 [async_llm.py:261] Added request cmpl-000e1f7a75474fd4ba742a8532ae2aca-0.
INFO 03-02 00:54:34 [logger.py:42] Received request cmpl-34d0941be39b4c98ace2c29fcfe99138-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:34 [async_llm.py:261] Added request cmpl-34d0941be39b4c98ace2c29fcfe99138-0.
INFO 03-02 00:54:35 [logger.py:42] Received request cmpl-ba0389651a5244e08b451835a3d8216b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:35 [async_llm.py:261] Added request cmpl-ba0389651a5244e08b451835a3d8216b-0.
INFO 03-02 00:54:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:36 [logger.py:42] Received request cmpl-8d769a17d3e74e9ea17c508b29ee0a00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:36 [async_llm.py:261] Added request cmpl-8d769a17d3e74e9ea17c508b29ee0a00-0.
INFO 03-02 00:54:37 [logger.py:42] Received request cmpl-a18bee72e7b5454aa4e834baaf8bb416-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:37 [async_llm.py:261] Added request cmpl-a18bee72e7b5454aa4e834baaf8bb416-0.
INFO 03-02 00:54:38 [logger.py:42] Received request cmpl-93e4c15232594825ab0ca878ce2c8a93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:38 [async_llm.py:261] Added request cmpl-93e4c15232594825ab0ca878ce2c8a93-0.
INFO 03-02 00:54:39 [logger.py:42] Received request cmpl-40efeb974e514b86bff6785b998712f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:39 [async_llm.py:261] Added request cmpl-40efeb974e514b86bff6785b998712f6-0.
INFO 03-02 00:54:40 [logger.py:42] Received request cmpl-f67ab6eae3d647489c242f30d5a18ccc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:40 [async_llm.py:261] Added request cmpl-f67ab6eae3d647489c242f30d5a18ccc-0.
INFO 03-02 00:54:41 [logger.py:42] Received request cmpl-0e7e956945ec4627b48a324855179d01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:41 [async_llm.py:261] Added request cmpl-0e7e956945ec4627b48a324855179d01-0.
INFO 03-02 00:54:42 [logger.py:42] Received request cmpl-90dc7701bcfd4b9ca202124c5c814ca4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:42 [async_llm.py:261] Added request cmpl-90dc7701bcfd4b9ca202124c5c814ca4-0.
INFO 03-02 00:54:43 [logger.py:42] Received request cmpl-ec14a92d769748b7ace81a9a5e080d22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:43 [async_llm.py:261] Added request cmpl-ec14a92d769748b7ace81a9a5e080d22-0.
INFO 03-02 00:54:45 [logger.py:42] Received request cmpl-c75a8a55339048f89ca6a81200d68682-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:45 [async_llm.py:261] Added request cmpl-c75a8a55339048f89ca6a81200d68682-0.
INFO 03-02 00:54:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
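Each `Received request` / `200 OK` / `Added request` triplet above corresponds to one client call against the OpenAI-compatible `/v1/completions` route. A minimal sketch of that call, assuming a local endpoint URL — the path and the payload values (`max_tokens=5`, `temperature=0.0`, the prompt text) are taken from the log lines, while the host, port, and `send_completion` helper name are assumptions for illustration:

```python
# Sketch of the client call behind each log triplet above.
# Endpoint host/port are assumed; payload fields mirror the logged params.
import json
from urllib import request

payload = {
    "model": "CR-70B",  # model name shown in the Funcpod header
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,
    "temperature": 0.0,
}

def send_completion(base_url="http://localhost:8000"):
    """POST the payload to the OpenAI-compatible /v1/completions route."""
    req = request.Request(
        base_url + "/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:  # server side logs "200 OK" on success
        return json.load(resp)

print(json.dumps(payload))
```

With `temperature=0.0` the sampling is greedy, which is why every request in this run is parameterized identically apart from its generated request ID.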
INFO 03-02 00:54:46 [logger.py:42] Received request cmpl-b02814d88a534eb4b18cfb254a507dbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:46 [async_llm.py:261] Added request cmpl-b02814d88a534eb4b18cfb254a507dbc-0.
INFO 03-02 00:54:47 [logger.py:42] Received request cmpl-6022202a0eeb46a4ae4a559e92e6ba5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:47 [async_llm.py:261] Added request cmpl-6022202a0eeb46a4ae4a559e92e6ba5b-0.
INFO 03-02 00:54:48 [logger.py:42] Received request cmpl-e0cae93222f24e63ad13e28add761b41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:48 [async_llm.py:261] Added request cmpl-e0cae93222f24e63ad13e28add761b41-0.
INFO 03-02 00:54:49 [logger.py:42] Received request cmpl-e53c1c678bf64da9a0e214f41d3cb8cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:49 [async_llm.py:261] Added request cmpl-e53c1c678bf64da9a0e214f41d3cb8cf-0.
INFO 03-02 00:54:50 [logger.py:42] Received request cmpl-4897189534dc45f395cd5d6e69a23d54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:50 [async_llm.py:261] Added request cmpl-4897189534dc45f395cd5d6e69a23d54-0.
INFO 03-02 00:54:51 [logger.py:42] Received request cmpl-0a0f7647991747bf99d852c93aae38bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:51 [async_llm.py:261] Added request cmpl-0a0f7647991747bf99d852c93aae38bb-0.
INFO 03-02 00:54:52 [logger.py:42] Received request cmpl-fef811c8416d42708ed8c263d80a064d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:52 [async_llm.py:261] Added request cmpl-fef811c8416d42708ed8c263d80a064d-0.
INFO 03-02 00:54:53 [logger.py:42] Received request cmpl-d27a4876a70440d2a03e5ec35b744d4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:53 [async_llm.py:261] Added request cmpl-d27a4876a70440d2a03e5ec35b744d4c-0.
INFO 03-02 00:54:54 [logger.py:42] Received request cmpl-a25967646ad1421eb84eb3d57005f14d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:54 [async_llm.py:261] Added request cmpl-a25967646ad1421eb84eb3d57005f14d-0.
INFO 03-02 00:54:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:54:56 [logger.py:42] Received request cmpl-3a91b4053e59431bae1b0ed27a8c3fbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:56 [async_llm.py:261] Added request cmpl-3a91b4053e59431bae1b0ed27a8c3fbc-0.
INFO 03-02 00:54:57 [logger.py:42] Received request cmpl-5835be603eae41d1839604846db8c25c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:57 [async_llm.py:261] Added request cmpl-5835be603eae41d1839604846db8c25c-0.
INFO 03-02 00:54:58 [logger.py:42] Received request cmpl-2757f7ef39ce4b9dba70fdbedd6975c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:58 [async_llm.py:261] Added request cmpl-2757f7ef39ce4b9dba70fdbedd6975c7-0.
INFO 03-02 00:54:59 [logger.py:42] Received request cmpl-1a166eb248504c8c8bfefe7e1c826bb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:54:59 [async_llm.py:261] Added request cmpl-1a166eb248504c8c8bfefe7e1c826bb0-0.
INFO 03-02 00:55:00 [logger.py:42] Received request cmpl-ddc7948ebb6f48c4bf8359978320c337-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:00 [async_llm.py:261] Added request cmpl-ddc7948ebb6f48c4bf8359978320c337-0.
INFO 03-02 00:55:01 [logger.py:42] Received request cmpl-9419c671b6724c9f84115e64d0edabb3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:01 [async_llm.py:261] Added request cmpl-9419c671b6724c9f84115e64d0edabb3-0.
INFO 03-02 00:55:02 [logger.py:42] Received request cmpl-3d33cb4db7e940e7bed12e288c591ad7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:02 [async_llm.py:261] Added request cmpl-3d33cb4db7e940e7bed12e288c591ad7-0.
INFO 03-02 00:55:03 [logger.py:42] Received request cmpl-bb4d4efbb8594200b5814d7fadea8d68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:03 [async_llm.py:261] Added request cmpl-bb4d4efbb8594200b5814d7fadea8d68-0.
INFO 03-02 00:55:04 [logger.py:42] Received request cmpl-2ecf7934a46a41cab2dab016864ab163-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:04 [async_llm.py:261] Added request cmpl-2ecf7934a46a41cab2dab016864ab163-0.
INFO 03-02 00:55:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:05 [logger.py:42] Received request cmpl-0ef0e35669e94c9f8bb84f84e2c834e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:05 [async_llm.py:261] Added request cmpl-0ef0e35669e94c9f8bb84f84e2c834e0-0.
INFO 03-02 00:55:06 [logger.py:42] Received request cmpl-d67e11b5ad4a4f15832ed606c4ac5820-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:06 [async_llm.py:261] Added request cmpl-d67e11b5ad4a4f15832ed606c4ac5820-0.
INFO 03-02 00:55:08 [logger.py:42] Received request cmpl-30618168fa634c868a3f6e36fe7a62a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:08 [async_llm.py:261] Added request cmpl-30618168fa634c868a3f6e36fe7a62a2-0.
INFO 03-02 00:55:09 [logger.py:42] Received request cmpl-a3947c80155b4d1fbeb9eda2056bf9d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:09 [async_llm.py:261] Added request cmpl-a3947c80155b4d1fbeb9eda2056bf9d7-0.
INFO 03-02 00:55:10 [logger.py:42] Received request cmpl-1712eb356cc041ce9cc337d507066e71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:10 [async_llm.py:261] Added request cmpl-1712eb356cc041ce9cc337d507066e71-0.
INFO 03-02 00:55:11 [logger.py:42] Received request cmpl-980ff44419ae4b1ca23bd1525dde57bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:11 [async_llm.py:261] Added request cmpl-980ff44419ae4b1ca23bd1525dde57bb-0.
INFO 03-02 00:55:12 [logger.py:42] Received request cmpl-16ecf7a78f494b3b842a445249241d43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:12 [async_llm.py:261] Added request cmpl-16ecf7a78f494b3b842a445249241d43-0.
INFO 03-02 00:55:13 [logger.py:42] Received request cmpl-dc12df702df24ae2a1d6594ad84472b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:13 [async_llm.py:261] Added request cmpl-dc12df702df24ae2a1d6594ad84472b4-0.
INFO 03-02 00:55:14 [logger.py:42] Received request cmpl-2723781d189642069e8e70eb88454886-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:14 [async_llm.py:261] Added request cmpl-2723781d189642069e8e70eb88454886-0.
INFO 03-02 00:55:15 [logger.py:42] Received request cmpl-58f59b2a4d124982985cdcc103855559-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:15 [async_llm.py:261] Added request cmpl-58f59b2a4d124982985cdcc103855559-0.
INFO 03-02 00:55:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:16 [logger.py:42] Received request cmpl-d6a52c441e6544689f4ab36aff03a36b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:16 [async_llm.py:261] Added request cmpl-d6a52c441e6544689f4ab36aff03a36b-0.
INFO 03-02 00:55:17 [logger.py:42] Received request cmpl-96cfa5e0b77648b6a56a557aa80ee49d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:17 [async_llm.py:261] Added request cmpl-96cfa5e0b77648b6a56a557aa80ee49d-0.
INFO 03-02 00:55:19 [logger.py:42] Received request cmpl-b207abe693124fc6a42f19c1367c2add-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:19 [async_llm.py:261] Added request cmpl-b207abe693124fc6a42f19c1367c2add-0.
INFO 03-02 00:55:20 [logger.py:42] Received request cmpl-b56c3f8df75a49cfb91d7354422a2c02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:20 [async_llm.py:261] Added request cmpl-b56c3f8df75a49cfb91d7354422a2c02-0.
INFO 03-02 00:55:21 [logger.py:42] Received request cmpl-9bfff9e043514c5daff7772fe98297ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:21 [async_llm.py:261] Added request cmpl-9bfff9e043514c5daff7772fe98297ed-0.
INFO 03-02 00:55:22 [logger.py:42] Received request cmpl-9ed5f05fa3474089b5cd191dfc6cdf5b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:22 [async_llm.py:261] Added request cmpl-9ed5f05fa3474089b5cd191dfc6cdf5b-0.
INFO 03-02 00:55:23 [logger.py:42] Received request cmpl-e2bc1af8a0e1474193d7066bdbca76f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:23 [async_llm.py:261] Added request cmpl-e2bc1af8a0e1474193d7066bdbca76f4-0.
INFO 03-02 00:55:24 [logger.py:42] Received request cmpl-f1a5c6d3c522491394f1b5619e5960b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:24 [async_llm.py:261] Added request cmpl-f1a5c6d3c522491394f1b5619e5960b7-0.
INFO 03-02 00:55:25 [logger.py:42] Received request cmpl-dafe8b4773f244ac8720c4403a3c4fea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:25 [async_llm.py:261] Added request cmpl-dafe8b4773f244ac8720c4403a3c4fea-0.
INFO 03-02 00:55:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:26 [logger.py:42] Received request cmpl-b1fdbd26f3c84a3b8915516ddcf135db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:26 [async_llm.py:261] Added request cmpl-b1fdbd26f3c84a3b8915516ddcf135db-0.
INFO 03-02 00:55:27 [logger.py:42] Received request cmpl-fe8ac54de6d743fc8e47cec19cada7df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:27 [async_llm.py:261] Added request cmpl-fe8ac54de6d743fc8e47cec19cada7df-0.
INFO 03-02 00:55:28 [logger.py:42] Received request cmpl-d622593ed5464316812fffe707151cc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:28 [async_llm.py:261] Added request cmpl-d622593ed5464316812fffe707151cc1-0.
INFO 03-02 00:55:29 [logger.py:42] Received request cmpl-2a84f6d531b14ec792d641a4606df847-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:29 [async_llm.py:261] Added request cmpl-2a84f6d531b14ec792d641a4606df847-0.
INFO 03-02 00:55:31 [logger.py:42] Received request cmpl-19fe9897c8f9456eabf5cf8a2b1286d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:31 [async_llm.py:261] Added request cmpl-19fe9897c8f9456eabf5cf8a2b1286d1-0.
INFO 03-02 00:55:32 [logger.py:42] Received request cmpl-c92d49bfabfb421fab47677544d9c0e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:32 [async_llm.py:261] Added request cmpl-c92d49bfabfb421fab47677544d9c0e5-0.
INFO 03-02 00:55:33 [logger.py:42] Received request cmpl-0042d0ca71794c7fa827788a21aa4249-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:33 [async_llm.py:261] Added request cmpl-0042d0ca71794c7fa827788a21aa4249-0.
INFO 03-02 00:55:34 [logger.py:42] Received request cmpl-fb5a0cc0db754c098d02977ec6202efe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:34 [async_llm.py:261] Added request cmpl-fb5a0cc0db754c098d02977ec6202efe-0.
INFO 03-02 00:55:35 [logger.py:42] Received request cmpl-8151b7ae401b40af97999cd0002098c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:35 [async_llm.py:261] Added request cmpl-8151b7ae401b40af97999cd0002098c1-0.
INFO 03-02 00:55:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:36 [logger.py:42] Received request cmpl-18deac5451734ade8d4e3887f80a622e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:36 [async_llm.py:261] Added request cmpl-18deac5451734ade8d4e3887f80a622e-0.
INFO 03-02 00:55:37 [logger.py:42] Received request cmpl-0843ee8fb93441fcb8240afc86c85082-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:37 [async_llm.py:261] Added request cmpl-0843ee8fb93441fcb8240afc86c85082-0.
INFO 03-02 00:55:38 [logger.py:42] Received request cmpl-e592acc31e8743d49ddc6c51bc02a891-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:38 [async_llm.py:261] Added request cmpl-e592acc31e8743d49ddc6c51bc02a891-0.
INFO 03-02 00:55:39 [logger.py:42] Received request cmpl-f1cc9ce6d5c44200a4f98946baeb1f2d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:39 [async_llm.py:261] Added request cmpl-f1cc9ce6d5c44200a4f98946baeb1f2d-0.
INFO 03-02 00:55:40 [logger.py:42] Received request cmpl-f00797b072444f078c72b321cc50440e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:40 [async_llm.py:261] Added request cmpl-f00797b072444f078c72b321cc50440e-0.
INFO 03-02 00:55:42 [logger.py:42] Received request cmpl-66c01f6c076c4dff99844ff8a1b12b2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:42 [async_llm.py:261] Added request cmpl-66c01f6c076c4dff99844ff8a1b12b2c-0.
INFO 03-02 00:55:43 [logger.py:42] Received request cmpl-55118e9fad7847b2b266f5ec9738f38d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:43 [async_llm.py:261] Added request cmpl-55118e9fad7847b2b266f5ec9738f38d-0.
INFO 03-02 00:55:44 [logger.py:42] Received request cmpl-efd7a33e134e40a3976d2a19cacfcb92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:44 [async_llm.py:261] Added request cmpl-efd7a33e134e40a3976d2a19cacfcb92-0.
INFO 03-02 00:55:45 [logger.py:42] Received request cmpl-9d878b47f1114abf973d9ef270f256d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:45 [async_llm.py:261] Added request cmpl-9d878b47f1114abf973d9ef270f256d6-0.
INFO 03-02 00:55:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:46 [logger.py:42] Received request cmpl-aca208a4c3f843b7865bd173778a48ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:46 [async_llm.py:261] Added request cmpl-aca208a4c3f843b7865bd173778a48ef-0.
INFO 03-02 00:55:47 [logger.py:42] Received request cmpl-2d0637dc46684417bca2e6a4918d97ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:47 [async_llm.py:261] Added request cmpl-2d0637dc46684417bca2e6a4918d97ee-0.
INFO 03-02 00:55:48 [logger.py:42] Received request cmpl-58e967626a25457fa49c2342d19106d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:48 [async_llm.py:261] Added request cmpl-58e967626a25457fa49c2342d19106d9-0.
INFO 03-02 00:55:49 [logger.py:42] Received request cmpl-a517c311997749ec921f9965c32b5ff4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:49 [async_llm.py:261] Added request cmpl-a517c311997749ec921f9965c32b5ff4-0.
INFO 03-02 00:55:50 [logger.py:42] Received request cmpl-c086e401606c4fa687e8cce0734492f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:50 [async_llm.py:261] Added request cmpl-c086e401606c4fa687e8cce0734492f7-0.
INFO 03-02 00:55:51 [logger.py:42] Received request cmpl-2569bf2e8fce48108e24235a3b863805-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:51 [async_llm.py:261] Added request cmpl-2569bf2e8fce48108e24235a3b863805-0.
INFO 03-02 00:55:52 [logger.py:42] Received request cmpl-cc080b80a00e4b6f98a7946f7c0c2a13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:52 [async_llm.py:261] Added request cmpl-cc080b80a00e4b6f98a7946f7c0c2a13-0.
INFO 03-02 00:55:54 [logger.py:42] Received request cmpl-75b47331482042a8bcdf02ab3009b09a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:54 [async_llm.py:261] Added request cmpl-75b47331482042a8bcdf02ab3009b09a-0.
INFO 03-02 00:55:55 [logger.py:42] Received request cmpl-6cce8dba34e54036b177bdb9d5c9cdb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:55 [async_llm.py:261] Added request cmpl-6cce8dba34e54036b177bdb9d5c9cdb7-0.
INFO 03-02 00:55:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:55:56 [logger.py:42] Received request cmpl-ebe5ab92acc8428a841fa369f3671a62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:56 [async_llm.py:261] Added request cmpl-ebe5ab92acc8428a841fa369f3671a62-0.
INFO 03-02 00:55:57 [logger.py:42] Received request cmpl-3d65a9e7f2ca478bb9202b1448d558bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:57 [async_llm.py:261] Added request cmpl-3d65a9e7f2ca478bb9202b1448d558bb-0.
INFO 03-02 00:55:58 [logger.py:42] Received request cmpl-9ea18d3762474e638d2ad688432f7712-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:58 [async_llm.py:261] Added request cmpl-9ea18d3762474e638d2ad688432f7712-0.
INFO 03-02 00:55:59 [logger.py:42] Received request cmpl-70d82aed24e04e41b070c5f45f32c085-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:55:59 [async_llm.py:261] Added request cmpl-70d82aed24e04e41b070c5f45f32c085-0.
INFO 03-02 00:56:00 [logger.py:42] Received request cmpl-7e6b8b7c73f04fc39709a9ecbf726ef6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:00 [async_llm.py:261] Added request cmpl-7e6b8b7c73f04fc39709a9ecbf726ef6-0.
INFO 03-02 00:56:01 [logger.py:42] Received request cmpl-170bac4b72694d5da30ffc42d2eac0c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:01 [async_llm.py:261] Added request cmpl-170bac4b72694d5da30ffc42d2eac0c2-0.
INFO 03-02 00:56:02 [logger.py:42] Received request cmpl-5292a65517f1490d8ac3a8c7679f94cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:02 [async_llm.py:261] Added request cmpl-5292a65517f1490d8ac3a8c7679f94cf-0.
INFO 03-02 00:56:03 [logger.py:42] Received request cmpl-786b957136fd433c973514f9544e6157-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:03 [async_llm.py:261] Added request cmpl-786b957136fd433c973514f9544e6157-0.
INFO 03-02 00:56:05 [logger.py:42] Received request cmpl-4e69a17d8d57458d92039df1981e2503-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:05 [async_llm.py:261] Added request cmpl-4e69a17d8d57458d92039df1981e2503-0.
INFO 03-02 00:56:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 request/response triplets from 00:56:06 to 00:56:14 omitted: same prompt and SamplingParams as above, only timestamps and request IDs differ ...]
INFO 03-02 00:56:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 request/response triplets from 00:56:15 to 00:56:25 omitted: same prompt and SamplingParams as above, only timestamps and request IDs differ ...]
INFO 03-02 00:56:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 request/response triplets from 00:56:26 to 00:56:35 omitted: same prompt and SamplingParams as above, only timestamps and request IDs differ ...]
INFO 03-02 00:56:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 request/response triplets from 00:56:36 to 00:56:45 omitted: same prompt and SamplingParams as above, only timestamps and request IDs differ ...]
INFO 03-02 00:56:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:56:46 [logger.py:42] Received request cmpl-a053f2b52b9248afad0cc400e92b7792-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:56:46 [async_llm.py:261] Added request cmpl-a053f2b52b9248afad0cc400e92b7792-0.
[... 34 further "Received request" / "POST /v1/completions HTTP/1.1" 200 OK / "Added request" cycles omitted: one request per second from 00:56:47 to 00:57:23, all with the identical prompt 'write a quick sort algorithm.' and identical SamplingParams (temperature=0.0, max_tokens=5), plus 3 periodic engine-stats lines (Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%) ...]
INFO 03-02 00:57:24 [logger.py:42] Received request cmpl-6617e287ab88471d9f5074735de25721-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:24 [async_llm.py:261] Added request cmpl-6617e287ab88471d9f5074735de25721-0.
INFO 03-02 00:57:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
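Each "Received request" entry above corresponds to a POST against the funcpod's OpenAI-compatible `/v1/completions` endpoint; with `max_tokens=5` and `temperature=0.0`, these look like lightweight, deterministic probe requests. A minimal client-side sketch is below. The base URL and port are assumptions (the log only shows the server side); the model name comes from the Funcpod table, and the prompt and sampling parameters match the logged SamplingParams.

```python
import json
import urllib.request

# Placeholder endpoint for the funcpod's vLLM server (assumed; not in the log).
BASE_URL = "http://localhost:8000"


def build_completion_request(prompt: str, max_tokens: int = 5) -> dict:
    """Build the JSON body for a /v1/completions call matching the logged
    SamplingParams: greedy decoding (temperature=0.0, top_p=1.0, n=1)."""
    return {
        "model": "CR-70B",       # model name from the Funcpod table above
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,
        "top_p": 1.0,
        "n": 1,
    }


def post_completion(body: dict) -> dict:
    """POST the body to the completions endpoint and decode the JSON reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


body = build_completion_request("write a quick sort algorithm.")
```

Calling `post_completion(body)` once per second against a Ready funcpod would reproduce the request pattern shown in this log.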
INFO 03-02 00:57:26 [logger.py:42] Received request cmpl-b391529e471b4421a6ddd8234b6990ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:26 [async_llm.py:261] Added request cmpl-b391529e471b4421a6ddd8234b6990ac-0.
INFO 03-02 00:57:27 [logger.py:42] Received request cmpl-1c3d74427b0a4c86803921850580723a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:27 [async_llm.py:261] Added request cmpl-1c3d74427b0a4c86803921850580723a-0.
INFO 03-02 00:57:28 [logger.py:42] Received request cmpl-f19f6c331f5948139a2d4034ab69fc01-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:28 [async_llm.py:261] Added request cmpl-f19f6c331f5948139a2d4034ab69fc01-0.
INFO 03-02 00:57:29 [logger.py:42] Received request cmpl-e17a5e982e834da392ea690095d3298d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:29 [async_llm.py:261] Added request cmpl-e17a5e982e834da392ea690095d3298d-0.
INFO 03-02 00:57:30 [logger.py:42] Received request cmpl-4958bf649ab64997bf62e5ca279ea35b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:30 [async_llm.py:261] Added request cmpl-4958bf649ab64997bf62e5ca279ea35b-0.
INFO 03-02 00:57:31 [logger.py:42] Received request cmpl-fc8e127ecc4640f9acde1176cf07b8d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:31 [async_llm.py:261] Added request cmpl-fc8e127ecc4640f9acde1176cf07b8d3-0.
INFO 03-02 00:57:32 [logger.py:42] Received request cmpl-5cc99a626717455094ddf0dd49636939-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:32 [async_llm.py:261] Added request cmpl-5cc99a626717455094ddf0dd49636939-0.
INFO 03-02 00:57:33 [logger.py:42] Received request cmpl-c3c00d10a4b8444783b7e6a68f491933-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:33 [async_llm.py:261] Added request cmpl-c3c00d10a4b8444783b7e6a68f491933-0.
INFO 03-02 00:57:34 [logger.py:42] Received request cmpl-1ec87cb710e9455c83a905ce9c49370e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:34 [async_llm.py:261] Added request cmpl-1ec87cb710e9455c83a905ce9c49370e-0.
INFO 03-02 00:57:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:35 [logger.py:42] Received request cmpl-f8e8ab8a0c6440ddb3d6e0cf5a132f75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:35 [async_llm.py:261] Added request cmpl-f8e8ab8a0c6440ddb3d6e0cf5a132f75-0.
INFO 03-02 00:57:37 [logger.py:42] Received request cmpl-c54532bb9cb941989c155da226a6ef86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:37 [async_llm.py:261] Added request cmpl-c54532bb9cb941989c155da226a6ef86-0.
INFO 03-02 00:57:38 [logger.py:42] Received request cmpl-e5c2636844b64763b2f8d9913adaba2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:38 [async_llm.py:261] Added request cmpl-e5c2636844b64763b2f8d9913adaba2f-0.
INFO 03-02 00:57:39 [logger.py:42] Received request cmpl-670c7ddfa6314256bbe06e147f8c9c0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:39 [async_llm.py:261] Added request cmpl-670c7ddfa6314256bbe06e147f8c9c0e-0.
INFO 03-02 00:57:40 [logger.py:42] Received request cmpl-79d1764e1e5e45839941703a98fddbae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:40 [async_llm.py:261] Added request cmpl-79d1764e1e5e45839941703a98fddbae-0.
INFO 03-02 00:57:41 [logger.py:42] Received request cmpl-d1b4f114e51249568b5d453bd2ee9286-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:41 [async_llm.py:261] Added request cmpl-d1b4f114e51249568b5d453bd2ee9286-0.
INFO 03-02 00:57:42 [logger.py:42] Received request cmpl-9d83aa805d6b4a14bb4064475d0ba7ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:42 [async_llm.py:261] Added request cmpl-9d83aa805d6b4a14bb4064475d0ba7ec-0.
INFO 03-02 00:57:43 [logger.py:42] Received request cmpl-be3ed455c3a14d5d8ad2a5c38c5f8afc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:43 [async_llm.py:261] Added request cmpl-be3ed455c3a14d5d8ad2a5c38c5f8afc-0.
INFO 03-02 00:57:44 [logger.py:42] Received request cmpl-da805646bea74582a8993286c0e3e705-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:44 [async_llm.py:261] Added request cmpl-da805646bea74582a8993286c0e3e705-0.
INFO 03-02 00:57:45 [logger.py:42] Received request cmpl-ca8327895a124e4e85dd157842280ef0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:45 [async_llm.py:261] Added request cmpl-ca8327895a124e4e85dd157842280ef0-0.
INFO 03-02 00:57:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:46 [logger.py:42] Received request cmpl-81fa706b783c42c2a4a81ce1f27e78c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:46 [async_llm.py:261] Added request cmpl-81fa706b783c42c2a4a81ce1f27e78c3-0.
INFO 03-02 00:57:47 [logger.py:42] Received request cmpl-b6322606e9a64d2182ef57abd7463a35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:47 [async_llm.py:261] Added request cmpl-b6322606e9a64d2182ef57abd7463a35-0.
INFO 03-02 00:57:49 [logger.py:42] Received request cmpl-fb5a701344b74c048f30a659c0d0a65c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:49 [async_llm.py:261] Added request cmpl-fb5a701344b74c048f30a659c0d0a65c-0.
INFO 03-02 00:57:50 [logger.py:42] Received request cmpl-50ee7200dd714bee968f8b692c638968-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:50 [async_llm.py:261] Added request cmpl-50ee7200dd714bee968f8b692c638968-0.
INFO 03-02 00:57:51 [logger.py:42] Received request cmpl-379166a4c5cc49b8b480120f7fd78d51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:51 [async_llm.py:261] Added request cmpl-379166a4c5cc49b8b480120f7fd78d51-0.
INFO 03-02 00:57:52 [logger.py:42] Received request cmpl-4185039089ea42b9b92465ed4b136690-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:52 [async_llm.py:261] Added request cmpl-4185039089ea42b9b92465ed4b136690-0.
INFO 03-02 00:57:53 [logger.py:42] Received request cmpl-13616ee6f9704825a40f8cd4fa10609c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:53 [async_llm.py:261] Added request cmpl-13616ee6f9704825a40f8cd4fa10609c-0.
INFO 03-02 00:57:54 [logger.py:42] Received request cmpl-d65d04518b0b4209833979ebe17e8814-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:54 [async_llm.py:261] Added request cmpl-d65d04518b0b4209833979ebe17e8814-0.
INFO 03-02 00:57:55 [logger.py:42] Received request cmpl-585250d34e8f461cb257d62dc3c4269f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:55 [async_llm.py:261] Added request cmpl-585250d34e8f461cb257d62dc3c4269f-0.
INFO 03-02 00:57:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:57:56 [logger.py:42] Received request cmpl-dafe94be0f1f48bca12521ecd4cfcc04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:56 [async_llm.py:261] Added request cmpl-dafe94be0f1f48bca12521ecd4cfcc04-0.
INFO 03-02 00:57:57 [logger.py:42] Received request cmpl-29a3e5ff962a40c7b382821706e50e97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:57 [async_llm.py:261] Added request cmpl-29a3e5ff962a40c7b382821706e50e97-0.
INFO 03-02 00:57:58 [logger.py:42] Received request cmpl-5cd730b2f91a4b5d9b3874143abab494-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:57:58 [async_llm.py:261] Added request cmpl-5cd730b2f91a4b5d9b3874143abab494-0.
INFO 03-02 00:58:00 [logger.py:42] Received request cmpl-fa0b9bbec07c4b2785c8668eeaef0942-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:00 [async_llm.py:261] Added request cmpl-fa0b9bbec07c4b2785c8668eeaef0942-0.
INFO 03-02 00:58:01 [logger.py:42] Received request cmpl-6cc248ebd97a4ebc80eddf1da16d645a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:01 [async_llm.py:261] Added request cmpl-6cc248ebd97a4ebc80eddf1da16d645a-0.
INFO 03-02 00:58:02 [logger.py:42] Received request cmpl-136b0742f7de49ecaf611f6a1c9791a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:02 [async_llm.py:261] Added request cmpl-136b0742f7de49ecaf611f6a1c9791a0-0.
INFO 03-02 00:58:03 [logger.py:42] Received request cmpl-bc4baa3282b041d4b73c8d39f48f5c3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:03 [async_llm.py:261] Added request cmpl-bc4baa3282b041d4b73c8d39f48f5c3a-0.
INFO 03-02 00:58:04 [logger.py:42] Received request cmpl-5ce637c2c4df4274bafa5e4bdd47bf6c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:04 [async_llm.py:261] Added request cmpl-5ce637c2c4df4274bafa5e4bdd47bf6c-0.
INFO 03-02 00:58:05 [logger.py:42] Received request cmpl-c0be5a5a7191462e9de0231cbbc2968d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:05 [async_llm.py:261] Added request cmpl-c0be5a5a7191462e9de0231cbbc2968d-0.
INFO 03-02 00:58:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:06 [logger.py:42] Received request cmpl-475294088c064b29a19b172b91d158fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:06 [async_llm.py:261] Added request cmpl-475294088c064b29a19b172b91d158fe-0.
INFO 03-02 00:58:07 [logger.py:42] Received request cmpl-d621a86ba2264e05b0ec5ee4cab699da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:07 [async_llm.py:261] Added request cmpl-d621a86ba2264e05b0ec5ee4cab699da-0.
INFO 03-02 00:58:08 [logger.py:42] Received request cmpl-f0c72b1f84b54a9aa91375eb646fad93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:08 [async_llm.py:261] Added request cmpl-f0c72b1f84b54a9aa91375eb646fad93-0.
INFO 03-02 00:58:09 [logger.py:42] Received request cmpl-f100a0693add43c3bfff4fbbd1e701c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:09 [async_llm.py:261] Added request cmpl-f100a0693add43c3bfff4fbbd1e701c1-0.
INFO 03-02 00:58:10 [logger.py:42] Received request cmpl-723e00cc49dc4e3d90c6f53c4619abac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:10 [async_llm.py:261] Added request cmpl-723e00cc49dc4e3d90c6f53c4619abac-0.
INFO 03-02 00:58:12 [logger.py:42] Received request cmpl-fa163bc975f24aa9865a13ee97388972-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:12 [async_llm.py:261] Added request cmpl-fa163bc975f24aa9865a13ee97388972-0.
INFO 03-02 00:58:13 [logger.py:42] Received request cmpl-0ba7911e696b4497959e30f37344e49d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:13 [async_llm.py:261] Added request cmpl-0ba7911e696b4497959e30f37344e49d-0.
INFO 03-02 00:58:14 [logger.py:42] Received request cmpl-cc5e70fcbb714e249200f8fd643b5d0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:14 [async_llm.py:261] Added request cmpl-cc5e70fcbb714e249200f8fd643b5d0c-0.
INFO 03-02 00:58:15 [logger.py:42] Received request cmpl-e8e02594a92341d6999411349aeb14f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:15 [async_llm.py:261] Added request cmpl-e8e02594a92341d6999411349aeb14f8-0.
INFO 03-02 00:58:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:16 [logger.py:42] Received request cmpl-a717a6ee43984d81867b81ba4d89dd94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:16 [async_llm.py:261] Added request cmpl-a717a6ee43984d81867b81ba4d89dd94-0.
INFO 03-02 00:58:17 [logger.py:42] Received request cmpl-c65685be753642f9b6e1f0496f2a3d42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:17 [async_llm.py:261] Added request cmpl-c65685be753642f9b6e1f0496f2a3d42-0.
INFO 03-02 00:58:18 [logger.py:42] Received request cmpl-4b0de665d53641bda5decf305d942275-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:18 [async_llm.py:261] Added request cmpl-4b0de665d53641bda5decf305d942275-0.
INFO 03-02 00:58:19 [logger.py:42] Received request cmpl-1313ddd1d28a426e8bb66b97677b8204-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:19 [async_llm.py:261] Added request cmpl-1313ddd1d28a426e8bb66b97677b8204-0.
INFO 03-02 00:58:20 [logger.py:42] Received request cmpl-11c616a6fd7b42df97a7eb9571541216-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:20 [async_llm.py:261] Added request cmpl-11c616a6fd7b42df97a7eb9571541216-0.
INFO 03-02 00:58:21 [logger.py:42] Received request cmpl-27493fc95d30452b8e43fa481b02ebfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:21 [async_llm.py:261] Added request cmpl-27493fc95d30452b8e43fa481b02ebfa-0.
INFO 03-02 00:58:23 [logger.py:42] Received request cmpl-b08cf58c59f14b56a1b0e4d4ce31e948-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:23 [async_llm.py:261] Added request cmpl-b08cf58c59f14b56a1b0e4d4ce31e948-0.
INFO 03-02 00:58:24 [logger.py:42] Received request cmpl-e7a1082aa9354cee95e3adde2cb1ff4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:24 [async_llm.py:261] Added request cmpl-e7a1082aa9354cee95e3adde2cb1ff4b-0.
INFO 03-02 00:58:25 [logger.py:42] Received request cmpl-4c1d3533ef614869bdd54a42bd4ab288-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:25 [async_llm.py:261] Added request cmpl-4c1d3533ef614869bdd54a42bd4ab288-0.
INFO 03-02 00:58:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:26 [logger.py:42] Received request cmpl-0f4315d8c7b540c196d859e74f87c6dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:26 [async_llm.py:261] Added request cmpl-0f4315d8c7b540c196d859e74f87c6dc-0.
INFO 03-02 00:58:27 [logger.py:42] Received request cmpl-143d4a423bac4758a5b1585af12b1a63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:27 [async_llm.py:261] Added request cmpl-143d4a423bac4758a5b1585af12b1a63-0.
INFO 03-02 00:58:28 [logger.py:42] Received request cmpl-bd07f9bed4bd40e9b25aee6d5155125b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:28 [async_llm.py:261] Added request cmpl-bd07f9bed4bd40e9b25aee6d5155125b-0.
INFO 03-02 00:58:29 [logger.py:42] Received request cmpl-27724c186fa8475ba17444269651c961-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:29 [async_llm.py:261] Added request cmpl-27724c186fa8475ba17444269651c961-0.
INFO 03-02 00:58:30 [logger.py:42] Received request cmpl-7cd3390cda604fff849873f1a26d7fab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:30 [async_llm.py:261] Added request cmpl-7cd3390cda604fff849873f1a26d7fab-0.
INFO 03-02 00:58:31 [logger.py:42] Received request cmpl-a2f03f510cdf4561ae7d3fa4fd706357-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:31 [async_llm.py:261] Added request cmpl-a2f03f510cdf4561ae7d3fa4fd706357-0.
INFO 03-02 00:58:32 [logger.py:42] Received request cmpl-548fa66a991e4567beae914e74d057ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:32 [async_llm.py:261] Added request cmpl-548fa66a991e4567beae914e74d057ad-0.
INFO 03-02 00:58:34 [logger.py:42] Received request cmpl-95d807a840b64846a03dccd610f487e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:34 [async_llm.py:261] Added request cmpl-95d807a840b64846a03dccd610f487e4-0.
INFO 03-02 00:58:35 [logger.py:42] Received request cmpl-b57b8b03fe444c5191cf3a2e225c50ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:35 [async_llm.py:261] Added request cmpl-b57b8b03fe444c5191cf3a2e225c50ca-0.
INFO 03-02 00:58:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:36 [logger.py:42] Received request cmpl-dce390f83f224dc9951a8c0c8001957e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:36 [async_llm.py:261] Added request cmpl-dce390f83f224dc9951a8c0c8001957e-0.
INFO 03-02 00:58:37 [logger.py:42] Received request cmpl-95798b1f41fc4c6eb28a65c39a22a0c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:37 [async_llm.py:261] Added request cmpl-95798b1f41fc4c6eb28a65c39a22a0c7-0.
INFO 03-02 00:58:38 [logger.py:42] Received request cmpl-40d6f6d7d5e34e69b65e035dc4cd80a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:38 [async_llm.py:261] Added request cmpl-40d6f6d7d5e34e69b65e035dc4cd80a2-0.
INFO 03-02 00:58:39 [logger.py:42] Received request cmpl-72fbc0976c1943eaa326ca5f3b092f96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:39 [async_llm.py:261] Added request cmpl-72fbc0976c1943eaa326ca5f3b092f96-0.
INFO 03-02 00:58:40 [logger.py:42] Received request cmpl-66737b7a9404490abad48d55fed23d97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:40 [async_llm.py:261] Added request cmpl-66737b7a9404490abad48d55fed23d97-0.
INFO 03-02 00:58:41 [logger.py:42] Received request cmpl-0cbdc0a10c3c4c75aba34b665bd1aabb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:41 [async_llm.py:261] Added request cmpl-0cbdc0a10c3c4c75aba34b665bd1aabb-0.
INFO 03-02 00:58:42 [logger.py:42] Received request cmpl-650912bdc0644084a758ad8fe07b8450-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:42 [async_llm.py:261] Added request cmpl-650912bdc0644084a758ad8fe07b8450-0.
INFO 03-02 00:58:43 [logger.py:42] Received request cmpl-a36e5121398f44129fccc9673353ede1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:43 [async_llm.py:261] Added request cmpl-a36e5121398f44129fccc9673353ede1-0.
INFO 03-02 00:58:44 [logger.py:42] Received request cmpl-3b06f19e40f24abea057e3df27e6f5bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:44 [async_llm.py:261] Added request cmpl-3b06f19e40f24abea057e3df27e6f5bf-0.
INFO 03-02 00:58:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
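The periodic `Engine 000` summary above is the natural hook for monitoring. A minimal sketch of extracting its numeric fields with a regex — the field set and ordering are assumed to stay as shown in this log, which may not hold across vLLM versions:

```python
import re
from typing import Optional

# Matches the engine-stats line as it appears in the entries above.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%"
)

def parse_engine_stats(line: str) -> Optional[dict]:
    """Return the numeric fields of an engine-stats log line, or None if absent."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
    }

sample = ("INFO 03-02 00:58:45 [loggers.py:116] Engine 000: "
          "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: "
          "4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, "
          "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")
print(parse_engine_stats(sample))
```

Feeding the stream of these lines into such a parser gives a time series of throughput and queue depth without scraping any other endpoint.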
INFO 03-02 00:58:46 [logger.py:42] Received request cmpl-41fb9b24668447439ff264c2ba83c7d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:46 [async_llm.py:261] Added request cmpl-41fb9b24668447439ff264c2ba83c7d5-0.
INFO 03-02 00:58:47 [logger.py:42] Received request cmpl-19beabdba94e424288d030cf57926ef1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:47 [async_llm.py:261] Added request cmpl-19beabdba94e424288d030cf57926ef1-0.
INFO 03-02 00:58:48 [logger.py:42] Received request cmpl-5f3ca04a15484337ace9d5b8d4d7dcaa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:48 [async_llm.py:261] Added request cmpl-5f3ca04a15484337ace9d5b8d4d7dcaa-0.
INFO 03-02 00:58:49 [logger.py:42] Received request cmpl-463a5938a67a4c2895fbccfe8a013516-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:49 [async_llm.py:261] Added request cmpl-463a5938a67a4c2895fbccfe8a013516-0.
INFO 03-02 00:58:50 [logger.py:42] Received request cmpl-ea1d1fb5a1d6426db782583601040bd6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:50 [async_llm.py:261] Added request cmpl-ea1d1fb5a1d6426db782583601040bd6-0.
INFO 03-02 00:58:51 [logger.py:42] Received request cmpl-c55776829b9749aab70598f1b91e77e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:51 [async_llm.py:261] Added request cmpl-c55776829b9749aab70598f1b91e77e1-0.
INFO 03-02 00:58:52 [logger.py:42] Received request cmpl-d0cf0f572d0844b1b8a400a8d9686c3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:52 [async_llm.py:261] Added request cmpl-d0cf0f572d0844b1b8a400a8d9686c3c-0.
INFO 03-02 00:58:53 [logger.py:42] Received request cmpl-e69fa7e1fe7e4fc7bc4b6f9afed4ada3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:53 [async_llm.py:261] Added request cmpl-e69fa7e1fe7e4fc7bc4b6f9afed4ada3-0.
INFO 03-02 00:58:54 [logger.py:42] Received request cmpl-919c7ddf1fc24e8ab70311e5dbc70de6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:54 [async_llm.py:261] Added request cmpl-919c7ddf1fc24e8ab70311e5dbc70de6-0.
INFO 03-02 00:58:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:58:55 [logger.py:42] Received request cmpl-16928a3373874950b52ec2c3e6fceefe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:55 [async_llm.py:261] Added request cmpl-16928a3373874950b52ec2c3e6fceefe-0.
INFO 03-02 00:58:57 [logger.py:42] Received request cmpl-e6ac0629e3844510ac8d8f4e51cd3cb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:57 [async_llm.py:261] Added request cmpl-e6ac0629e3844510ac8d8f4e51cd3cb0-0.
INFO 03-02 00:58:58 [logger.py:42] Received request cmpl-0cabccd3370f4295a8d03cdddfd81844-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:58 [async_llm.py:261] Added request cmpl-0cabccd3370f4295a8d03cdddfd81844-0.
INFO 03-02 00:58:59 [logger.py:42] Received request cmpl-74f1a6948cce47c6b826db13299cd0d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:58:59 [async_llm.py:261] Added request cmpl-74f1a6948cce47c6b826db13299cd0d2-0.
INFO 03-02 00:59:00 [logger.py:42] Received request cmpl-08ac04a93d5e42d691ccc4a46e5e307c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:00 [async_llm.py:261] Added request cmpl-08ac04a93d5e42d691ccc4a46e5e307c-0.
INFO 03-02 00:59:01 [logger.py:42] Received request cmpl-52d74ab5ffe946c4a67009073c434a76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:01 [async_llm.py:261] Added request cmpl-52d74ab5ffe946c4a67009073c434a76-0.
INFO 03-02 00:59:02 [logger.py:42] Received request cmpl-5bc38bb923bb4059b986232dcc4bcc9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:02 [async_llm.py:261] Added request cmpl-5bc38bb923bb4059b986232dcc4bcc9d-0.
INFO 03-02 00:59:03 [logger.py:42] Received request cmpl-8d6a4c7f3526423782365a9636f4e3e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:03 [async_llm.py:261] Added request cmpl-8d6a4c7f3526423782365a9636f4e3e8-0.
INFO 03-02 00:59:04 [logger.py:42] Received request cmpl-adf92b04828f456a8b7f77696adf9514-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:04 [async_llm.py:261] Added request cmpl-adf92b04828f456a8b7f77696adf9514-0.
INFO 03-02 00:59:05 [logger.py:42] Received request cmpl-e0f0d30f43404761bca094576f43be4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:05 [async_llm.py:261] Added request cmpl-e0f0d30f43404761bca094576f43be4e-0.
INFO 03-02 00:59:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:06 [logger.py:42] Received request cmpl-e0ad0d4d9b9b424ab9ad33b4342d5daa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:06 [async_llm.py:261] Added request cmpl-e0ad0d4d9b9b424ab9ad33b4342d5daa-0.
INFO 03-02 00:59:07 [logger.py:42] Received request cmpl-437ebb31d7974e46aab8820fc8168502-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:07 [async_llm.py:261] Added request cmpl-437ebb31d7974e46aab8820fc8168502-0.
INFO 03-02 00:59:09 [logger.py:42] Received request cmpl-f3e25132bbda4f7b9400a82360a2897e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:09 [async_llm.py:261] Added request cmpl-f3e25132bbda4f7b9400a82360a2897e-0.
INFO 03-02 00:59:10 [logger.py:42] Received request cmpl-1445a21896cd455a8742dc8f0caead92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:10 [async_llm.py:261] Added request cmpl-1445a21896cd455a8742dc8f0caead92-0.
INFO 03-02 00:59:11 [logger.py:42] Received request cmpl-bdc2d62b09154e69a66085805df20475-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:11 [async_llm.py:261] Added request cmpl-bdc2d62b09154e69a66085805df20475-0.
INFO 03-02 00:59:12 [logger.py:42] Received request cmpl-989dd099da524261aad8b777cdadbaf8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:12 [async_llm.py:261] Added request cmpl-989dd099da524261aad8b777cdadbaf8-0.
INFO 03-02 00:59:13 [logger.py:42] Received request cmpl-7d30be111d004fa3a31bb2567f50a20a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:13 [async_llm.py:261] Added request cmpl-7d30be111d004fa3a31bb2567f50a20a-0.
INFO 03-02 00:59:14 [logger.py:42] Received request cmpl-8be5fb3d9bad4195a934a86274f8f7e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:14 [async_llm.py:261] Added request cmpl-8be5fb3d9bad4195a934a86274f8f7e8-0.
INFO 03-02 00:59:15 [logger.py:42] Received request cmpl-c6e68278378a4199a6f0224b26860901-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:15 [async_llm.py:261] Added request cmpl-c6e68278378a4199a6f0224b26860901-0.
INFO 03-02 00:59:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:16 [logger.py:42] Received request cmpl-5d3054b66c39401eb42f9553a9bdf15b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:16 [async_llm.py:261] Added request cmpl-5d3054b66c39401eb42f9553a9bdf15b-0.
INFO 03-02 00:59:17 [logger.py:42] Received request cmpl-219278a6646f45df8a02c47176e9a07d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:17 [async_llm.py:261] Added request cmpl-219278a6646f45df8a02c47176e9a07d-0.
INFO 03-02 00:59:18 [logger.py:42] Received request cmpl-aa78b387d682482296d6389a625f6d4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:18 [async_llm.py:261] Added request cmpl-aa78b387d682482296d6389a625f6d4a-0.
INFO 03-02 00:59:20 [logger.py:42] Received request cmpl-a50a17dfbde2402bb0ca6b70facd1f5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:20 [async_llm.py:261] Added request cmpl-a50a17dfbde2402bb0ca6b70facd1f5a-0.
INFO 03-02 00:59:21 [logger.py:42] Received request cmpl-d5834729c38d4ec4b053ccf5ad5c8259-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:21 [async_llm.py:261] Added request cmpl-d5834729c38d4ec4b053ccf5ad5c8259-0.
INFO 03-02 00:59:22 [logger.py:42] Received request cmpl-3e3031b98a4e4d44966f3d50eabe2505-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:22 [async_llm.py:261] Added request cmpl-3e3031b98a4e4d44966f3d50eabe2505-0.
INFO 03-02 00:59:23 [logger.py:42] Received request cmpl-5ad0926521cf471091cf1cde62386fad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:23 [async_llm.py:261] Added request cmpl-5ad0926521cf471091cf1cde62386fad-0.
INFO 03-02 00:59:24 [logger.py:42] Received request cmpl-b10b957b0bf644a6b3007e96adcc30f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:24 [async_llm.py:261] Added request cmpl-b10b957b0bf644a6b3007e96adcc30f4-0.
INFO 03-02 00:59:25 [logger.py:42] Received request cmpl-505452e4eecb436eba3a7b80f638c9d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:25 [async_llm.py:261] Added request cmpl-505452e4eecb436eba3a7b80f638c9d8-0.
INFO 03-02 00:59:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:26 [logger.py:42] Received request cmpl-09585a24c3154b15af24464b43c3da52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:26 [async_llm.py:261] Added request cmpl-09585a24c3154b15af24464b43c3da52-0.
INFO 03-02 00:59:27 [logger.py:42] Received request cmpl-06625471beb2479b93e44e20871ccc27-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:27 [async_llm.py:261] Added request cmpl-06625471beb2479b93e44e20871ccc27-0.
INFO 03-02 00:59:28 [logger.py:42] Received request cmpl-aaf7c89f46fe48a5988a485e21fbb520-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:28 [async_llm.py:261] Added request cmpl-aaf7c89f46fe48a5988a485e21fbb520-0.
INFO 03-02 00:59:29 [logger.py:42] Received request cmpl-30083c4d14404e9ebc9e96e32808583f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:29 [async_llm.py:261] Added request cmpl-30083c4d14404e9ebc9e96e32808583f-0.
INFO 03-02 00:59:30 [logger.py:42] Received request cmpl-0d3ba78a59c549eea9dda82abbed17e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:30 [async_llm.py:261] Added request cmpl-0d3ba78a59c549eea9dda82abbed17e5-0.
INFO 03-02 00:59:32 [logger.py:42] Received request cmpl-5d2ed53e084f4be19103d4ce39134886-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:32 [async_llm.py:261] Added request cmpl-5d2ed53e084f4be19103d4ce39134886-0.
INFO 03-02 00:59:33 [logger.py:42] Received request cmpl-25d64f9e523f4162bebd55b3e68b8875-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:33 [async_llm.py:261] Added request cmpl-25d64f9e523f4162bebd55b3e68b8875-0.
INFO 03-02 00:59:34 [logger.py:42] Received request cmpl-1cbfd3cb54974909822cad93f41b03bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:34 [async_llm.py:261] Added request cmpl-1cbfd3cb54974909822cad93f41b03bb-0.
INFO 03-02 00:59:35 [logger.py:42] Received request cmpl-05f877a3d4e848e8aa124b9147176d36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:35 [async_llm.py:261] Added request cmpl-05f877a3d4e848e8aa124b9147176d36-0.
INFO 03-02 00:59:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:36 [logger.py:42] Received request cmpl-c44f1d9d754f44ca8146868e6bac383c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:36 [async_llm.py:261] Added request cmpl-c44f1d9d754f44ca8146868e6bac383c-0.
INFO 03-02 00:59:37 [logger.py:42] Received request cmpl-cd749a0cef6145849d56aea51f55e528-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:37 [async_llm.py:261] Added request cmpl-cd749a0cef6145849d56aea51f55e528-0.
INFO 03-02 00:59:38 [logger.py:42] Received request cmpl-9d0fbcc2ef034835b420292809fdc0fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:38 [async_llm.py:261] Added request cmpl-9d0fbcc2ef034835b420292809fdc0fd-0.
INFO 03-02 00:59:39 [logger.py:42] Received request cmpl-57776aee2599449db9242ca147c47890-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:39 [async_llm.py:261] Added request cmpl-57776aee2599449db9242ca147c47890-0.
INFO 03-02 00:59:40 [logger.py:42] Received request cmpl-d69887e31e084aae9db621936107d34b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:40 [async_llm.py:261] Added request cmpl-d69887e31e084aae9db621936107d34b-0.
INFO 03-02 00:59:41 [logger.py:42] Received request cmpl-1816733a36704350974f51437f75c415-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:41 [async_llm.py:261] Added request cmpl-1816733a36704350974f51437f75c415-0.
INFO 03-02 00:59:43 [logger.py:42] Received request cmpl-51a2e6026633443988b47adc1bb920d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:43 [async_llm.py:261] Added request cmpl-51a2e6026633443988b47adc1bb920d8-0.
INFO 03-02 00:59:44 [logger.py:42] Received request cmpl-2d9992198f894fe78c760216241dd013-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:44 [async_llm.py:261] Added request cmpl-2d9992198f894fe78c760216241dd013-0.
INFO 03-02 00:59:45 [logger.py:42] Received request cmpl-194eba5d06e5410781494b0d81c16bbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:45 [async_llm.py:261] Added request cmpl-194eba5d06e5410781494b0d81c16bbb-0.
INFO 03-02 00:59:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:46 [logger.py:42] Received request cmpl-1900c150d5494986b2a7d3e95646d9ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:46 [async_llm.py:261] Added request cmpl-1900c150d5494986b2a7d3e95646d9ec-0.
INFO 03-02 00:59:47 [logger.py:42] Received request cmpl-b4e38025f02141928da43711a0ca179c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:47 [async_llm.py:261] Added request cmpl-b4e38025f02141928da43711a0ca179c-0.
INFO 03-02 00:59:48 [logger.py:42] Received request cmpl-d1ff5176a5ff434786dc2e838320a502-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:48 [async_llm.py:261] Added request cmpl-d1ff5176a5ff434786dc2e838320a502-0.
INFO 03-02 00:59:49 [logger.py:42] Received request cmpl-67de8e6dfa854831b15c2053c402fe86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:49 [async_llm.py:261] Added request cmpl-67de8e6dfa854831b15c2053c402fe86-0.
INFO 03-02 00:59:50 [logger.py:42] Received request cmpl-aea44b3c49c94536be96e6370d377098-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:50 [async_llm.py:261] Added request cmpl-aea44b3c49c94536be96e6370d377098-0.
INFO 03-02 00:59:51 [logger.py:42] Received request cmpl-41a9367980bb4a7e85dd7ecb7dff8662-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:51 [async_llm.py:261] Added request cmpl-41a9367980bb4a7e85dd7ecb7dff8662-0.
INFO 03-02 00:59:52 [logger.py:42] Received request cmpl-a133f102ee0b4b3dbe010ea1d9a5c7e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:52 [async_llm.py:261] Added request cmpl-a133f102ee0b4b3dbe010ea1d9a5c7e8-0.
INFO 03-02 00:59:53 [logger.py:42] Received request cmpl-a4df7a5397d5443f83fe964e16ab735c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:53 [async_llm.py:261] Added request cmpl-a4df7a5397d5443f83fe964e16ab735c-0.
INFO 03-02 00:59:55 [logger.py:42] Received request cmpl-bad12764c94c40ef93bb484aa8c719d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:55 [async_llm.py:261] Added request cmpl-bad12764c94c40ef93bb484aa8c719d8-0.
INFO 03-02 00:59:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 00:59:56 [logger.py:42] Received request cmpl-80630c6f64b5441da0e953801a0be89b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:56 [async_llm.py:261] Added request cmpl-80630c6f64b5441da0e953801a0be89b-0.
INFO 03-02 00:59:57 [logger.py:42] Received request cmpl-2318919dd6b54b568da8468fecbc8c41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:57 [async_llm.py:261] Added request cmpl-2318919dd6b54b568da8468fecbc8c41-0.
INFO 03-02 00:59:58 [logger.py:42] Received request cmpl-840833d2b18647c8a4a6a158a09f15b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:58 [async_llm.py:261] Added request cmpl-840833d2b18647c8a4a6a158a09f15b7-0.
INFO 03-02 00:59:59 [logger.py:42] Received request cmpl-9c03529b9bad45a19858e72a05a1a2d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 00:59:59 [async_llm.py:261] Added request cmpl-9c03529b9bad45a19858e72a05a1a2d1-0.
INFO 03-02 01:00:00 [logger.py:42] Received request cmpl-38134a6902d04f91bba3c69fe2dd7ef0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:00 [async_llm.py:261] Added request cmpl-38134a6902d04f91bba3c69fe2dd7ef0-0.
INFO 03-02 01:00:01 [logger.py:42] Received request cmpl-fc79dd2f9ed84d87b120ba0089fa7b86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:01 [async_llm.py:261] Added request cmpl-fc79dd2f9ed84d87b120ba0089fa7b86-0.
INFO 03-02 01:00:02 [logger.py:42] Received request cmpl-d8488803084e4659afe1fcff5e0ed8c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:02 [async_llm.py:261] Added request cmpl-d8488803084e4659afe1fcff5e0ed8c4-0.
INFO 03-02 01:00:03 [logger.py:42] Received request cmpl-befdc46f503845f9ab982cadf725224a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:03 [async_llm.py:261] Added request cmpl-befdc46f503845f9ab982cadf725224a-0.
INFO 03-02 01:00:04 [logger.py:42] Received request cmpl-4987df7572e34d5d82e46f17ac5cb48f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:04 [async_llm.py:261] Added request cmpl-4987df7572e34d5d82e46f17ac5cb48f-0.
INFO 03-02 01:00:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:06 [logger.py:42] Received request cmpl-d457b5baba8049ce8c224a0c13ab9459-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:06 [async_llm.py:261] Added request cmpl-d457b5baba8049ce8c224a0c13ab9459-0.
INFO 03-02 01:00:07 [logger.py:42] Received request cmpl-10baafda197d4eb9b24cebe6ab38d787-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:07 [async_llm.py:261] Added request cmpl-10baafda197d4eb9b24cebe6ab38d787-0.
INFO 03-02 01:00:08 [logger.py:42] Received request cmpl-bd4811f2584c49168e6d3a39598011c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:08 [async_llm.py:261] Added request cmpl-bd4811f2584c49168e6d3a39598011c7-0.
INFO 03-02 01:00:09 [logger.py:42] Received request cmpl-bee13f69b2724ad190deecddbdc4df67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:09 [async_llm.py:261] Added request cmpl-bee13f69b2724ad190deecddbdc4df67-0.
INFO 03-02 01:00:10 [logger.py:42] Received request cmpl-3aa6e9432baa40df847e42762cd752b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:10 [async_llm.py:261] Added request cmpl-3aa6e9432baa40df847e42762cd752b0-0.
INFO 03-02 01:00:11 [logger.py:42] Received request cmpl-f69f82a7211e46d69a9d85ac22f245c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:11 [async_llm.py:261] Added request cmpl-f69f82a7211e46d69a9d85ac22f245c9-0.
INFO 03-02 01:00:12 [logger.py:42] Received request cmpl-fdabe79421be489f99b8d211ece45b22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:12 [async_llm.py:261] Added request cmpl-fdabe79421be489f99b8d211ece45b22-0.
INFO 03-02 01:00:13 [logger.py:42] Received request cmpl-d634ab5aa54342d0a7ba780f99e0b259-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:13 [async_llm.py:261] Added request cmpl-d634ab5aa54342d0a7ba780f99e0b259-0.
INFO 03-02 01:00:14 [logger.py:42] Received request cmpl-0df0df94749a4590890fae58295d52f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:14 [async_llm.py:261] Added request cmpl-0df0df94749a4590890fae58295d52f4-0.
INFO 03-02 01:00:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:15 [logger.py:42] Received request cmpl-9b601f18d63a457bb12604aecca85086-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:15 [async_llm.py:261] Added request cmpl-9b601f18d63a457bb12604aecca85086-0.
INFO 03-02 01:00:16 [logger.py:42] Received request cmpl-b5bed207244e4c08b7e036eae2236f56-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:16 [async_llm.py:261] Added request cmpl-b5bed207244e4c08b7e036eae2236f56-0.
INFO 03-02 01:00:18 [logger.py:42] Received request cmpl-47cb664e0a84476cbe59e323c64803b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:18 [async_llm.py:261] Added request cmpl-47cb664e0a84476cbe59e323c64803b5-0.
INFO 03-02 01:00:19 [logger.py:42] Received request cmpl-583c6916de344a718a174c87b40d2a09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:19 [async_llm.py:261] Added request cmpl-583c6916de344a718a174c87b40d2a09-0.
INFO 03-02 01:00:20 [logger.py:42] Received request cmpl-06c1ff009c6042b0b05b59e65bb5d1fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:20 [async_llm.py:261] Added request cmpl-06c1ff009c6042b0b05b59e65bb5d1fc-0.
INFO 03-02 01:00:21 [logger.py:42] Received request cmpl-3272bcb332eb4f0da29e5d55b7754413-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:21 [async_llm.py:261] Added request cmpl-3272bcb332eb4f0da29e5d55b7754413-0.
INFO 03-02 01:00:22 [logger.py:42] Received request cmpl-989659e1b3804360adf8e3b8793965b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:22 [async_llm.py:261] Added request cmpl-989659e1b3804360adf8e3b8793965b4-0.
INFO 03-02 01:00:23 [logger.py:42] Received request cmpl-c5a699a8c8234d90b2f9644de2a80fac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:23 [async_llm.py:261] Added request cmpl-c5a699a8c8234d90b2f9644de2a80fac-0.
INFO 03-02 01:00:24 [logger.py:42] Received request cmpl-e54441a701d34a4c957b4eeb3ed9f5bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:24 [async_llm.py:261] Added request cmpl-e54441a701d34a4c957b4eeb3ed9f5bd-0.
INFO 03-02 01:00:25 [logger.py:42] Received request cmpl-b510bfc779f34bbeb6b185eba918487f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:25 [async_llm.py:261] Added request cmpl-b510bfc779f34bbeb6b185eba918487f-0.
INFO 03-02 01:00:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:26 [logger.py:42] Received request cmpl-5293e77e60714672b624c50de193d59d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:26 [async_llm.py:261] Added request cmpl-5293e77e60714672b624c50de193d59d-0.
INFO 03-02 01:00:27 [logger.py:42] Received request cmpl-2ab33f7a9b024e539a102c24c305d956-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:27 [async_llm.py:261] Added request cmpl-2ab33f7a9b024e539a102c24c305d956-0.
INFO 03-02 01:00:29 [logger.py:42] Received request cmpl-873c4455bccb4a13bdbdbc54fe2bac87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:29 [async_llm.py:261] Added request cmpl-873c4455bccb4a13bdbdbc54fe2bac87-0.
INFO 03-02 01:00:30 [logger.py:42] Received request cmpl-388bb03354fa4453b27b55710c913d38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:30 [async_llm.py:261] Added request cmpl-388bb03354fa4453b27b55710c913d38-0.
INFO 03-02 01:00:31 [logger.py:42] Received request cmpl-fb3d32c8fa4e4bc7b0f6c8fd00ecba91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:31 [async_llm.py:261] Added request cmpl-fb3d32c8fa4e4bc7b0f6c8fd00ecba91-0.
INFO 03-02 01:00:32 [logger.py:42] Received request cmpl-fea32311b7e04436b6ed20db8f2a6f3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:32 [async_llm.py:261] Added request cmpl-fea32311b7e04436b6ed20db8f2a6f3c-0.
INFO 03-02 01:00:33 [logger.py:42] Received request cmpl-2f76188f385546e5a7e0570c5df83883-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:33 [async_llm.py:261] Added request cmpl-2f76188f385546e5a7e0570c5df83883-0.
INFO 03-02 01:00:34 [logger.py:42] Received request cmpl-4e786045438f419a8ac5b5691b2e2be4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:34 [async_llm.py:261] Added request cmpl-4e786045438f419a8ac5b5691b2e2be4-0.
INFO 03-02 01:00:35 [logger.py:42] Received request cmpl-a6b17c9bbb1c48cf8e6c12692b3c59bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:35 [async_llm.py:261] Added request cmpl-a6b17c9bbb1c48cf8e6c12692b3c59bc-0.
INFO 03-02 01:00:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:36 [logger.py:42] Received request cmpl-a88101ea7ef74d138a6e80ce835c18e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:36 [async_llm.py:261] Added request cmpl-a88101ea7ef74d138a6e80ce835c18e6-0.
INFO 03-02 01:00:37 [logger.py:42] Received request cmpl-a02f35af66ba4270bb82bae7a6f00b02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:37 [async_llm.py:261] Added request cmpl-a02f35af66ba4270bb82bae7a6f00b02-0.
INFO 03-02 01:00:38 [logger.py:42] Received request cmpl-189c1f9f434547468f12bea46109af30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:38 [async_llm.py:261] Added request cmpl-189c1f9f434547468f12bea46109af30-0.
INFO 03-02 01:00:39 [logger.py:42] Received request cmpl-5bb0ada12fef4b6698d14a2a76cfee3a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:39 [async_llm.py:261] Added request cmpl-5bb0ada12fef4b6698d14a2a76cfee3a-0.
INFO 03-02 01:00:41 [logger.py:42] Received request cmpl-26384fb47ae0423bb2f3cf04d87966ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:41 [async_llm.py:261] Added request cmpl-26384fb47ae0423bb2f3cf04d87966ab-0.
INFO 03-02 01:00:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:46 [logger.py:42] Received request cmpl-d8ad0a080df348c397643bd119073cae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:46 [async_llm.py:261] Added request cmpl-d8ad0a080df348c397643bd119073cae-0.
INFO 03-02 01:00:47 [logger.py:42] Received request cmpl-3cd728e85ddb4f619eb02fb1f950a069-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:47 [async_llm.py:261] Added request cmpl-3cd728e85ddb4f619eb02fb1f950a069-0.
INFO 03-02 01:00:48 [logger.py:42] Received request cmpl-f9c6cd8e63d7482b927a7b17fc6f5b47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:48 [async_llm.py:261] Added request cmpl-f9c6cd8e63d7482b927a7b17fc6f5b47-0.
INFO 03-02 01:00:49 [logger.py:42] Received request cmpl-2167aa4db60243ceb8735262299a5558-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:49 [async_llm.py:261] Added request cmpl-2167aa4db60243ceb8735262299a5558-0.
INFO 03-02 01:00:50 [logger.py:42] Received request cmpl-aa9396ed49f840a5be6a3356013b65d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:50 [async_llm.py:261] Added request cmpl-aa9396ed49f840a5be6a3356013b65d3-0.
INFO 03-02 01:00:52 [logger.py:42] Received request cmpl-01bcd48bcae84d04b706683bf2887384-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:52 [async_llm.py:261] Added request cmpl-01bcd48bcae84d04b706683bf2887384-0.
INFO 03-02 01:00:53 [logger.py:42] Received request cmpl-d159eec0b22f4047a7656154f8bea1ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:53 [async_llm.py:261] Added request cmpl-d159eec0b22f4047a7656154f8bea1ba-0.
INFO 03-02 01:00:54 [logger.py:42] Received request cmpl-d8a8d199ae29472daddde77e6f8f9966-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:54 [async_llm.py:261] Added request cmpl-d8a8d199ae29472daddde77e6f8f9966-0.
INFO 03-02 01:00:55 [logger.py:42] Received request cmpl-e1c6c923296f4781b18b661290319c29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:55 [async_llm.py:261] Added request cmpl-e1c6c923296f4781b18b661290319c29-0.
INFO 03-02 01:00:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:00:56 [logger.py:42] Received request cmpl-715198128fff4166afc0a8d24420a6f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:56 [async_llm.py:261] Added request cmpl-715198128fff4166afc0a8d24420a6f7-0.
INFO 03-02 01:00:57 [logger.py:42] Received request cmpl-66deb599e29940b5bfa416fe1db5fe4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:57 [async_llm.py:261] Added request cmpl-66deb599e29940b5bfa416fe1db5fe4d-0.
INFO 03-02 01:00:58 [logger.py:42] Received request cmpl-d64962a9eeee4a61931e47930dd32495-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:58 [async_llm.py:261] Added request cmpl-d64962a9eeee4a61931e47930dd32495-0.
INFO 03-02 01:00:59 [logger.py:42] Received request cmpl-7ff28edc1aba41068de24f266e055e0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:00:59 [async_llm.py:261] Added request cmpl-7ff28edc1aba41068de24f266e055e0a-0.
INFO 03-02 01:01:00 [logger.py:42] Received request cmpl-689645d3ed8a4714b1fe09ab90ac0fba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:00 [async_llm.py:261] Added request cmpl-689645d3ed8a4714b1fe09ab90ac0fba-0.
INFO 03-02 01:01:01 [logger.py:42] Received request cmpl-b2f62a35b44e47c1aa37eb181ae5a121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:01 [async_llm.py:261] Added request cmpl-b2f62a35b44e47c1aa37eb181ae5a121-0.
INFO 03-02 01:01:02 [logger.py:42] Received request cmpl-3a78654df78442158abb178185152a03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:02 [async_llm.py:261] Added request cmpl-3a78654df78442158abb178185152a03-0.
INFO 03-02 01:01:04 [logger.py:42] Received request cmpl-90980080504f4558a8e379c173fe3839-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:04 [async_llm.py:261] Added request cmpl-90980080504f4558a8e379c173fe3839-0.
INFO 03-02 01:01:05 [logger.py:42] Received request cmpl-23ab663956b24f7c861a847a9c54ea49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:05 [async_llm.py:261] Added request cmpl-23ab663956b24f7c861a847a9c54ea49-0.
INFO 03-02 01:01:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:01:06 [logger.py:42] Received request cmpl-bafbc8358958451790d0578e2e8cb68b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:06 [async_llm.py:261] Added request cmpl-bafbc8358958451790d0578e2e8cb68b-0.
INFO 03-02 01:01:07 [logger.py:42] Received request cmpl-46b7b9ec6460407fbf0cb13ad0ac04e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:07 [async_llm.py:261] Added request cmpl-46b7b9ec6460407fbf0cb13ad0ac04e4-0.
INFO 03-02 01:01:08 [logger.py:42] Received request cmpl-9b4944d654e84e1f86a8020eb7dfa8a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:08 [async_llm.py:261] Added request cmpl-9b4944d654e84e1f86a8020eb7dfa8a1-0.
INFO 03-02 01:01:09 [logger.py:42] Received request cmpl-905d3337312441e0b2719bf82f09b108-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:09 [async_llm.py:261] Added request cmpl-905d3337312441e0b2719bf82f09b108-0.
INFO 03-02 01:01:10 [logger.py:42] Received request cmpl-9b7f20a261f64098a8dc19b9aa75f6d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:10 [async_llm.py:261] Added request cmpl-9b7f20a261f64098a8dc19b9aa75f6d7-0.
INFO 03-02 01:01:11 [logger.py:42] Received request cmpl-b7f57ff8d4994c4c97beb73999778dc0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:11 [async_llm.py:261] Added request cmpl-b7f57ff8d4994c4c97beb73999778dc0-0.
INFO 03-02 01:01:12 [logger.py:42] Received request cmpl-a174eae79a8c4c968b9f6e5e749bef13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:12 [async_llm.py:261] Added request cmpl-a174eae79a8c4c968b9f6e5e749bef13-0.
INFO 03-02 01:01:13 [logger.py:42] Received request cmpl-f68abab5132b4e93adc70f06bc066e48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:13 [async_llm.py:261] Added request cmpl-f68abab5132b4e93adc70f06bc066e48-0.
INFO 03-02 01:01:15 [logger.py:42] Received request cmpl-a99bcd8959db4519bd0f0065279f60c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:15 [async_llm.py:261] Added request cmpl-a99bcd8959db4519bd0f0065279f60c1-0.
INFO 03-02 01:01:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:01:16 [logger.py:42] Received request cmpl-d0f6339d49db4c3aa63dde28e74db84a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:16 [async_llm.py:261] Added request cmpl-d0f6339d49db4c3aa63dde28e74db84a-0.
INFO 03-02 01:01:17 [logger.py:42] Received request cmpl-9badb445d24d44e584d95bde75a4bc84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:17 [async_llm.py:261] Added request cmpl-9badb445d24d44e584d95bde75a4bc84-0.
INFO 03-02 01:01:18 [logger.py:42] Received request cmpl-9c8cc92598ed4a90b72e7117f1ba0704-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:18 [async_llm.py:261] Added request cmpl-9c8cc92598ed4a90b72e7117f1ba0704-0.
INFO 03-02 01:01:19 [logger.py:42] Received request cmpl-bf2f6f92a9ed4528b9d6bde6ee978b85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:19 [async_llm.py:261] Added request cmpl-bf2f6f92a9ed4528b9d6bde6ee978b85-0.
INFO 03-02 01:01:20 [logger.py:42] Received request cmpl-081e169def0a4c08bccd537e2dba3d98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:01:20 [async_llm.py:261] Added request cmpl-081e169def0a4c08bccd537e2dba3d98-0.
[01:01:21-01:01:24: 4 further requests identical to the entry above (same prompt, SamplingParams, and prompt_token_ids; only the request id differs), each logged as Received, "POST /v1/completions HTTP/1.1" 200 OK, Added.]
INFO 03-02 01:01:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
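The averages in the stats line can be cross-checked against the request pattern. A minimal sketch, assuming each request carries the 7-token prompt shown in prompt_token_ids, every completion uses its full max_tokens=5 budget, and the stats window is the ~10 s spacing between loggers.py lines:

```python
# Cross-check vLLM's periodic stats line against the observed request rate.
PROMPT_TOKENS = 7   # len(prompt_token_ids) in every logged request
GEN_TOKENS = 5      # assumes each completion spends its full max_tokens=5 budget
WINDOW_S = 10.0     # stats lines are roughly 10 s apart in this log

def expected_throughput(requests_in_window: int) -> tuple[float, float]:
    """Return (avg prompt tok/s, avg generation tok/s), rounded like the log."""
    rate = requests_in_window / WINDOW_S
    return round(PROMPT_TOKENS * rate, 1), round(GEN_TOKENS * rate, 1)

# 9 requests in a window reproduces the logged 6.3 / 4.5 tokens/s;
# 10 requests in a window reproduces the 7.0 / 5.0 line.
print(expected_throughput(9))   # (6.3, 4.5)
print(expected_throughput(10))  # (7.0, 5.0)
```

Under these assumptions the 6.3/4.5 windows correspond to 9 requests in 10 s and the 7.0/5.0 window to 10, which matches the one-per-second arrival pattern with occasional 2 s gaps.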
[01:01:25-01:01:34: 9 requests identical to the earlier entries (prompt 'write a quick sort algorithm.', max_tokens=5, temperature=0.0), each logged as Received, "POST /v1/completions HTTP/1.1" 200 OK, Added.]
INFO 03-02 01:01:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[01:01:35-01:01:45: 10 requests identical to the earlier entries, each logged as Received, "POST /v1/completions HTTP/1.1" 200 OK, Added.]
INFO 03-02 01:01:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[01:01:46-01:01:55: 9 requests identical to the earlier entries, each logged as Received, "POST /v1/completions HTTP/1.1" 200 OK, Added.]
INFO 03-02 01:01:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[01:01:56-01:02:05: 9 requests identical to the earlier entries, each logged as Received, "POST /v1/completions HTTP/1.1" 200 OK, Added; the final entry (01:02:05) is cut off after its Received line where the capture ends.]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:05 [async_llm.py:261] Added request cmpl-c739af2ceba44e16a2021e4273e183fb-0.
INFO 03-02 01:02:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:06 [logger.py:42] Received request cmpl-d1191bdf05f44affbe30063671840c75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:06 [async_llm.py:261] Added request cmpl-d1191bdf05f44affbe30063671840c75-0.
INFO 03-02 01:02:07 [logger.py:42] Received request cmpl-9e2e68fbfee64b19ae46b10422f777ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:07 [async_llm.py:261] Added request cmpl-9e2e68fbfee64b19ae46b10422f777ad-0.
INFO 03-02 01:02:08 [logger.py:42] Received request cmpl-11bb4d9d0025499e87de3e71d639e54f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:08 [async_llm.py:261] Added request cmpl-11bb4d9d0025499e87de3e71d639e54f-0.
INFO 03-02 01:02:09 [logger.py:42] Received request cmpl-842393a2da2d48b4b23401c4d8d68964-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:09 [async_llm.py:261] Added request cmpl-842393a2da2d48b4b23401c4d8d68964-0.
INFO 03-02 01:02:10 [logger.py:42] Received request cmpl-d9d9e2438be844fb83bf4a2cdd7d2aca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:10 [async_llm.py:261] Added request cmpl-d9d9e2438be844fb83bf4a2cdd7d2aca-0.
INFO 03-02 01:02:12 [logger.py:42] Received request cmpl-b95d7e0cd0bd4be38d12bb4601798c13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:12 [async_llm.py:261] Added request cmpl-b95d7e0cd0bd4be38d12bb4601798c13-0.
INFO 03-02 01:02:13 [logger.py:42] Received request cmpl-616e82331af245f1bbc14cccae138d49-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:13 [async_llm.py:261] Added request cmpl-616e82331af245f1bbc14cccae138d49-0.
INFO 03-02 01:02:14 [logger.py:42] Received request cmpl-cfd7db7f0a64451c9b062df22fdd67e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:14 [async_llm.py:261] Added request cmpl-cfd7db7f0a64451c9b062df22fdd67e7-0.
INFO 03-02 01:02:15 [logger.py:42] Received request cmpl-454026c7e2974562ac9b095f2b3d61b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:15 [async_llm.py:261] Added request cmpl-454026c7e2974562ac9b095f2b3d61b9-0.
INFO 03-02 01:02:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:16 [logger.py:42] Received request cmpl-c0db46bc2d3f474087f508e9dcc2878f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:16 [async_llm.py:261] Added request cmpl-c0db46bc2d3f474087f508e9dcc2878f-0.
INFO 03-02 01:02:17 [logger.py:42] Received request cmpl-6dfcaa01a6c74c039de139a8360ec260-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:17 [async_llm.py:261] Added request cmpl-6dfcaa01a6c74c039de139a8360ec260-0.
INFO 03-02 01:02:18 [logger.py:42] Received request cmpl-79da11555f904667bd71b82645d55946-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:18 [async_llm.py:261] Added request cmpl-79da11555f904667bd71b82645d55946-0.
INFO 03-02 01:02:19 [logger.py:42] Received request cmpl-d7f2c1948165418bb7cd096f2da6acb7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:19 [async_llm.py:261] Added request cmpl-d7f2c1948165418bb7cd096f2da6acb7-0.
INFO 03-02 01:02:20 [logger.py:42] Received request cmpl-ef97b9cedbef4ab7a4d3b5ce1849d8d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:20 [async_llm.py:261] Added request cmpl-ef97b9cedbef4ab7a4d3b5ce1849d8d1-0.
INFO 03-02 01:02:21 [logger.py:42] Received request cmpl-9e7c3d046536485aa6f41f99bd20056a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:21 [async_llm.py:261] Added request cmpl-9e7c3d046536485aa6f41f99bd20056a-0.
INFO 03-02 01:02:22 [logger.py:42] Received request cmpl-47977d2bf0934302b2a99c88712a1d77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:22 [async_llm.py:261] Added request cmpl-47977d2bf0934302b2a99c88712a1d77-0.
INFO 03-02 01:02:24 [logger.py:42] Received request cmpl-ab51277ea7a14423b3f7026321633b99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:24 [async_llm.py:261] Added request cmpl-ab51277ea7a14423b3f7026321633b99-0.
INFO 03-02 01:02:25 [logger.py:42] Received request cmpl-0fb0ac3cabfa48d5b747de926968e0bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:25 [async_llm.py:261] Added request cmpl-0fb0ac3cabfa48d5b747de926968e0bf-0.
INFO 03-02 01:02:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:26 [logger.py:42] Received request cmpl-0effad3ea54c488babc279f0b29750b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:26 [async_llm.py:261] Added request cmpl-0effad3ea54c488babc279f0b29750b7-0.
INFO 03-02 01:02:27 [logger.py:42] Received request cmpl-e66d2227e363429380d9e56621abf83b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:27 [async_llm.py:261] Added request cmpl-e66d2227e363429380d9e56621abf83b-0.
INFO 03-02 01:02:28 [logger.py:42] Received request cmpl-6f24ae8faf2544b1a0e86aed15e91b42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:28 [async_llm.py:261] Added request cmpl-6f24ae8faf2544b1a0e86aed15e91b42-0.
INFO 03-02 01:02:29 [logger.py:42] Received request cmpl-50ee6dd5294447bc8741046834ff9e50-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:29 [async_llm.py:261] Added request cmpl-50ee6dd5294447bc8741046834ff9e50-0.
INFO 03-02 01:02:30 [logger.py:42] Received request cmpl-7810ea50fdcc499a99d5a5fe7ac12be7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:30 [async_llm.py:261] Added request cmpl-7810ea50fdcc499a99d5a5fe7ac12be7-0.
INFO 03-02 01:02:31 [logger.py:42] Received request cmpl-d0da1e03e8ce48868c4aa2a3232b1609-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:31 [async_llm.py:261] Added request cmpl-d0da1e03e8ce48868c4aa2a3232b1609-0.
INFO 03-02 01:02:32 [logger.py:42] Received request cmpl-87a70e3aa1014ccd8d319c923a09d84d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:32 [async_llm.py:261] Added request cmpl-87a70e3aa1014ccd8d319c923a09d84d-0.
INFO 03-02 01:02:33 [logger.py:42] Received request cmpl-6847c95790804f3dbb51de16634e2fcc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:33 [async_llm.py:261] Added request cmpl-6847c95790804f3dbb51de16634e2fcc-0.
INFO 03-02 01:02:35 [logger.py:42] Received request cmpl-83a4d34c7ca446e39d39f7eed3543d6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:35 [async_llm.py:261] Added request cmpl-83a4d34c7ca446e39d39f7eed3543d6d-0.
INFO 03-02 01:02:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:36 [logger.py:42] Received request cmpl-86f71e09d9de41acaa17ab12a6900b1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:36 [async_llm.py:261] Added request cmpl-86f71e09d9de41acaa17ab12a6900b1a-0.
INFO 03-02 01:02:37 [logger.py:42] Received request cmpl-bb2c9582a3f9467d94b7943001ab9f4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:37 [async_llm.py:261] Added request cmpl-bb2c9582a3f9467d94b7943001ab9f4b-0.
INFO 03-02 01:02:38 [logger.py:42] Received request cmpl-fe926ae838314d38996db5dc0d839817-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:38 [async_llm.py:261] Added request cmpl-fe926ae838314d38996db5dc0d839817-0.
INFO 03-02 01:02:39 [logger.py:42] Received request cmpl-2b78c008a3a04186adaf8f1a99c89efd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:39 [async_llm.py:261] Added request cmpl-2b78c008a3a04186adaf8f1a99c89efd-0.
INFO 03-02 01:02:40 [logger.py:42] Received request cmpl-5c4657c332f740c59c2e516115549607-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:40 [async_llm.py:261] Added request cmpl-5c4657c332f740c59c2e516115549607-0.
INFO 03-02 01:02:41 [logger.py:42] Received request cmpl-53f9195a2cad42838701b1be85f6acc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:41 [async_llm.py:261] Added request cmpl-53f9195a2cad42838701b1be85f6acc2-0.
INFO 03-02 01:02:42 [logger.py:42] Received request cmpl-854ca4b8c8414cffbaeba9b2935cc5c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:42 [async_llm.py:261] Added request cmpl-854ca4b8c8414cffbaeba9b2935cc5c6-0.
INFO 03-02 01:02:43 [logger.py:42] Received request cmpl-65f3f2fab4de49d89a7ee06fef5011c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:43 [async_llm.py:261] Added request cmpl-65f3f2fab4de49d89a7ee06fef5011c0-0.
INFO 03-02 01:02:44 [logger.py:42] Received request cmpl-3110640a8fe8468f996286d11693efb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:44 [async_llm.py:261] Added request cmpl-3110640a8fe8468f996286d11693efb9-0.
INFO 03-02 01:02:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:46 [logger.py:42] Received request cmpl-7b3a3ad330b045e5bd746b30f1d3455c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:46 [async_llm.py:261] Added request cmpl-7b3a3ad330b045e5bd746b30f1d3455c-0.
INFO 03-02 01:02:47 [logger.py:42] Received request cmpl-75fb45cc1b864fe1a047718b90e37030-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:47 [async_llm.py:261] Added request cmpl-75fb45cc1b864fe1a047718b90e37030-0.
INFO 03-02 01:02:48 [logger.py:42] Received request cmpl-bc60ea9ab9d14bcba244c30cbecd8c66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:48 [async_llm.py:261] Added request cmpl-bc60ea9ab9d14bcba244c30cbecd8c66-0.
INFO 03-02 01:02:49 [logger.py:42] Received request cmpl-c89b1459172c410d80177affed54e4b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:49 [async_llm.py:261] Added request cmpl-c89b1459172c410d80177affed54e4b0-0.
INFO 03-02 01:02:50 [logger.py:42] Received request cmpl-d9c56e78eaf74bb5800de1fcac9ee2c2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:50 [async_llm.py:261] Added request cmpl-d9c56e78eaf74bb5800de1fcac9ee2c2-0.
INFO 03-02 01:02:51 [logger.py:42] Received request cmpl-9ca8e81d4c79461b831f28c7c0fba864-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:51 [async_llm.py:261] Added request cmpl-9ca8e81d4c79461b831f28c7c0fba864-0.
INFO 03-02 01:02:52 [logger.py:42] Received request cmpl-06d30468ce574e9eb6a5bc2c441d4698-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:52 [async_llm.py:261] Added request cmpl-06d30468ce574e9eb6a5bc2c441d4698-0.
INFO 03-02 01:02:53 [logger.py:42] Received request cmpl-13e111f8c7374b2386fc95da9e4f6c91-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:53 [async_llm.py:261] Added request cmpl-13e111f8c7374b2386fc95da9e4f6c91-0.
INFO 03-02 01:02:54 [logger.py:42] Received request cmpl-3fab42fa4f4b4b028976e8e01328adf4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:54 [async_llm.py:261] Added request cmpl-3fab42fa4f4b4b028976e8e01328adf4-0.
INFO 03-02 01:02:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:02:55 [logger.py:42] Received request cmpl-7435f82fcf724a1086a961d45d24afb8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:55 [async_llm.py:261] Added request cmpl-7435f82fcf724a1086a961d45d24afb8-0.
INFO 03-02 01:02:56 [logger.py:42] Received request cmpl-afc35f4419bc496ea239c769064a122b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:56 [async_llm.py:261] Added request cmpl-afc35f4419bc496ea239c769064a122b-0.
INFO 03-02 01:02:58 [logger.py:42] Received request cmpl-ff2762df234246f9becc66fb770da0f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:58 [async_llm.py:261] Added request cmpl-ff2762df234246f9becc66fb770da0f3-0.
INFO 03-02 01:02:59 [logger.py:42] Received request cmpl-93e59b07e40e4df3884d62b728cfd7c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:02:59 [async_llm.py:261] Added request cmpl-93e59b07e40e4df3884d62b728cfd7c1-0.
INFO 03-02 01:03:00 [logger.py:42] Received request cmpl-e2cb1733d6124d3cb9eaff6cbc4a2ca8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:00 [async_llm.py:261] Added request cmpl-e2cb1733d6124d3cb9eaff6cbc4a2ca8-0.
INFO 03-02 01:03:01 [logger.py:42] Received request cmpl-0b0c3afb28c14d0f8e43329d2967862d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:01 [async_llm.py:261] Added request cmpl-0b0c3afb28c14d0f8e43329d2967862d-0.
INFO 03-02 01:03:02 [logger.py:42] Received request cmpl-be0ff29e2e944b8bb93ec94ed34b8bc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:02 [async_llm.py:261] Added request cmpl-be0ff29e2e944b8bb93ec94ed34b8bc1-0.
INFO 03-02 01:03:03 [logger.py:42] Received request cmpl-2b3fe86290dd4b34810523562ad9bc0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:03 [async_llm.py:261] Added request cmpl-2b3fe86290dd4b34810523562ad9bc0c-0.
INFO 03-02 01:03:04 [logger.py:42] Received request cmpl-60a6a5b57baa499a89b7700fd2412b1c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:04 [async_llm.py:261] Added request cmpl-60a6a5b57baa499a89b7700fd2412b1c-0.
INFO 03-02 01:03:05 [logger.py:42] Received request cmpl-eed5cb39462a4c52a945cc1b74e35913-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:05 [async_llm.py:261] Added request cmpl-eed5cb39462a4c52a945cc1b74e35913-0.
INFO 03-02 01:03:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:06 [logger.py:42] Received request cmpl-7467edf32bc8435aa05ec0464aee7571-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:06 [async_llm.py:261] Added request cmpl-7467edf32bc8435aa05ec0464aee7571-0.
INFO 03-02 01:03:07 [logger.py:42] Received request cmpl-5a49369c7e7342b2b79030c3144431eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:07 [async_llm.py:261] Added request cmpl-5a49369c7e7342b2b79030c3144431eb-0.
INFO 03-02 01:03:09 [logger.py:42] Received request cmpl-d9e53133a0c74a53ba24306f411637cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:09 [async_llm.py:261] Added request cmpl-d9e53133a0c74a53ba24306f411637cc-0.
INFO 03-02 01:03:10 [logger.py:42] Received request cmpl-9ba599e0a6674561b29b7e2a70347855-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:10 [async_llm.py:261] Added request cmpl-9ba599e0a6674561b29b7e2a70347855-0.
INFO 03-02 01:03:11 [logger.py:42] Received request cmpl-7a908b636db14323bbed1998f8402c73-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:11 [async_llm.py:261] Added request cmpl-7a908b636db14323bbed1998f8402c73-0.
INFO 03-02 01:03:12 [logger.py:42] Received request cmpl-eb77d6b0d68249e588bd5c5d20b3ee2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:12 [async_llm.py:261] Added request cmpl-eb77d6b0d68249e588bd5c5d20b3ee2f-0.
INFO 03-02 01:03:13 [logger.py:42] Received request cmpl-00465ba527114ccb9ad3b6e762d17719-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:13 [async_llm.py:261] Added request cmpl-00465ba527114ccb9ad3b6e762d17719-0.
INFO 03-02 01:03:14 [logger.py:42] Received request cmpl-dd9c34cc01a54c7badd09a0c3bb1a698-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:14 [async_llm.py:261] Added request cmpl-dd9c34cc01a54c7badd09a0c3bb1a698-0.
INFO 03-02 01:03:15 [logger.py:42] Received request cmpl-a0fc3964e5c7490f9314460cdfe31fa3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:15 [async_llm.py:261] Added request cmpl-a0fc3964e5c7490f9314460cdfe31fa3-0.
INFO 03-02 01:03:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:03:16 [logger.py:42] Received request cmpl-793f44b0d7764defa645a0bbdc2a4778-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:16 [async_llm.py:261] Added request cmpl-793f44b0d7764defa645a0bbdc2a4778-0.
INFO 03-02 01:03:17 [logger.py:42] Received request cmpl-213e44e06e1f4a53956bcfecfd808dd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:17 [async_llm.py:261] Added request cmpl-213e44e06e1f4a53956bcfecfd808dd1-0.
INFO 03-02 01:03:18 [logger.py:42] Received request cmpl-e5c451f40be9493690152a39e3c9d6c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:03:18 [async_llm.py:261] Added request cmpl-e5c451f40be9493690152a39e3c9d6c1-0.
INFO 03-02 01:03:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:03 [async_llm.py:261] Added request cmpl-2e310d62bc6846e78ca138565ae4ba3a-0.
INFO 03-02 01:04:04 [logger.py:42] Received request cmpl-6d454318b6824723906dfd400315def2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:04 [async_llm.py:261] Added request cmpl-6d454318b6824723906dfd400315def2-0.
INFO 03-02 01:04:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:06 [logger.py:42] Received request cmpl-37e631ddb9ec4f5abbe29ac0b63c78b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:06 [async_llm.py:261] Added request cmpl-37e631ddb9ec4f5abbe29ac0b63c78b9-0.
INFO 03-02 01:04:07 [logger.py:42] Received request cmpl-b7663e880b1948d0b4b17dba49be5a1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:07 [async_llm.py:261] Added request cmpl-b7663e880b1948d0b4b17dba49be5a1f-0.
INFO 03-02 01:04:08 [logger.py:42] Received request cmpl-52b3ceea9b31453b9a9b44a0d6cf8875-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:08 [async_llm.py:261] Added request cmpl-52b3ceea9b31453b9a9b44a0d6cf8875-0.
INFO 03-02 01:04:09 [logger.py:42] Received request cmpl-cedb44042fc244a9a5e85205be94b15c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:09 [async_llm.py:261] Added request cmpl-cedb44042fc244a9a5e85205be94b15c-0.
INFO 03-02 01:04:10 [logger.py:42] Received request cmpl-8536bb7ecf0a466388ce36056f6ef4a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:10 [async_llm.py:261] Added request cmpl-8536bb7ecf0a466388ce36056f6ef4a9-0.
INFO 03-02 01:04:11 [logger.py:42] Received request cmpl-b9ba9d98df9343d7941980860608c1e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:11 [async_llm.py:261] Added request cmpl-b9ba9d98df9343d7941980860608c1e4-0.
INFO 03-02 01:04:12 [logger.py:42] Received request cmpl-abd9e22849404c0882fde7c9f7e66524-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:12 [async_llm.py:261] Added request cmpl-abd9e22849404c0882fde7c9f7e66524-0.
INFO 03-02 01:04:13 [logger.py:42] Received request cmpl-d12039cc9a9d4d8a9c498f82835b5e7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:13 [async_llm.py:261] Added request cmpl-d12039cc9a9d4d8a9c498f82835b5e7e-0.
INFO 03-02 01:04:14 [logger.py:42] Received request cmpl-ad50ee9c0420433f92ebff5e6b3927c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:14 [async_llm.py:261] Added request cmpl-ad50ee9c0420433f92ebff5e6b3927c5-0.
INFO 03-02 01:04:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:15 [logger.py:42] Received request cmpl-89d4a5fcb781494993230bd7dae82f5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:15 [async_llm.py:261] Added request cmpl-89d4a5fcb781494993230bd7dae82f5a-0.
INFO 03-02 01:04:16 [logger.py:42] Received request cmpl-84ac66bf95d0431ab70f7782271b2137-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:16 [async_llm.py:261] Added request cmpl-84ac66bf95d0431ab70f7782271b2137-0.
INFO 03-02 01:04:18 [logger.py:42] Received request cmpl-beeee9f28a6b485283f911a2e3a7d0a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:18 [async_llm.py:261] Added request cmpl-beeee9f28a6b485283f911a2e3a7d0a9-0.
INFO 03-02 01:04:19 [logger.py:42] Received request cmpl-3767648857794c139b1bf400ab40d503-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:19 [async_llm.py:261] Added request cmpl-3767648857794c139b1bf400ab40d503-0.
INFO 03-02 01:04:20 [logger.py:42] Received request cmpl-84c99baf723543079d45147b49fd8e43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:20 [async_llm.py:261] Added request cmpl-84c99baf723543079d45147b49fd8e43-0.
INFO 03-02 01:04:21 [logger.py:42] Received request cmpl-671aa13cb4284402910675d997c2b0d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:21 [async_llm.py:261] Added request cmpl-671aa13cb4284402910675d997c2b0d7-0.
INFO 03-02 01:04:22 [logger.py:42] Received request cmpl-019fd1fd18c3479d9e709817a9dac5a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:22 [async_llm.py:261] Added request cmpl-019fd1fd18c3479d9e709817a9dac5a5-0.
INFO 03-02 01:04:23 [logger.py:42] Received request cmpl-cc32d831e46947c38f5bc1177926c98d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:23 [async_llm.py:261] Added request cmpl-cc32d831e46947c38f5bc1177926c98d-0.
INFO 03-02 01:04:24 [logger.py:42] Received request cmpl-fdc02bf41e63424d9611fa2232597e63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:24 [async_llm.py:261] Added request cmpl-fdc02bf41e63424d9611fa2232597e63-0.
INFO 03-02 01:04:25 [logger.py:42] Received request cmpl-bdadb22af3154e3e9c09117abc6c18e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:25 [async_llm.py:261] Added request cmpl-bdadb22af3154e3e9c09117abc6c18e6-0.
INFO 03-02 01:04:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:26 [logger.py:42] Received request cmpl-a420ed4cac3548169413633997ffae9e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:26 [async_llm.py:261] Added request cmpl-a420ed4cac3548169413633997ffae9e-0.
INFO 03-02 01:04:27 [logger.py:42] Received request cmpl-01850f5e342441f887a045f334250ea0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:27 [async_llm.py:261] Added request cmpl-01850f5e342441f887a045f334250ea0-0.
INFO 03-02 01:04:29 [logger.py:42] Received request cmpl-f0c5fec77fde420daf3474b85616df99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:29 [async_llm.py:261] Added request cmpl-f0c5fec77fde420daf3474b85616df99-0.
INFO 03-02 01:04:30 [logger.py:42] Received request cmpl-cdd258f3f49b4bbe80e9a7ea71aae4bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:30 [async_llm.py:261] Added request cmpl-cdd258f3f49b4bbe80e9a7ea71aae4bf-0.
INFO 03-02 01:04:31 [logger.py:42] Received request cmpl-66f9c71d180142fab16f9ce8fe5be880-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:31 [async_llm.py:261] Added request cmpl-66f9c71d180142fab16f9ce8fe5be880-0.
INFO 03-02 01:04:32 [logger.py:42] Received request cmpl-5f2de3fc22204c70999ddec5a24ad809-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:32 [async_llm.py:261] Added request cmpl-5f2de3fc22204c70999ddec5a24ad809-0.
INFO 03-02 01:04:33 [logger.py:42] Received request cmpl-55f9f4349404435da91f9127706fe807-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:33 [async_llm.py:261] Added request cmpl-55f9f4349404435da91f9127706fe807-0.
INFO 03-02 01:04:34 [logger.py:42] Received request cmpl-0ea0990978ad4950bc9967962a5a1bcf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:34 [async_llm.py:261] Added request cmpl-0ea0990978ad4950bc9967962a5a1bcf-0.
INFO 03-02 01:04:35 [logger.py:42] Received request cmpl-895b4763287e4de3954c41c7b19bc811-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:35 [async_llm.py:261] Added request cmpl-895b4763287e4de3954c41c7b19bc811-0.
INFO 03-02 01:04:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:04:36 [logger.py:42] Received request cmpl-8f241b1951aa47c780fce6501c01b6b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:36 [async_llm.py:261] Added request cmpl-8f241b1951aa47c780fce6501c01b6b7-0.
INFO 03-02 01:04:37 [logger.py:42] Received request cmpl-67cf2c7ee77d4ad9a20cf70377093121-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:04:37 [async_llm.py:261] Added request cmpl-67cf2c7ee77d4ad9a20cf70377093121-0.
[... the same three-line pattern (Received request / 200 OK / Added request) repeats roughly once per second with identical prompt and SamplingParams (max_tokens=5, temperature=0.0), a unique cmpl-* request ID each time, every request from 1.2.3.5:1235 answered 200 OK ...]
INFO 03-02 01:04:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... this engine-stats line recurs every 10 s (01:04:55, 01:05:05, 01:05:15) with unchanged values; the request pattern continues through 01:05:22, where the log excerpt ends ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:22 [async_llm.py:261] Added request cmpl-c1282ca031f64bfbb8bd24d82ee55a0e-0.
INFO 03-02 01:05:23 [logger.py:42] Received request cmpl-4781973bab7a4e4194b5815798860b1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:23 [async_llm.py:261] Added request cmpl-4781973bab7a4e4194b5815798860b1d-0.
INFO 03-02 01:05:24 [logger.py:42] Received request cmpl-c64e34238bec4e5ca7e303ab662a4a2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:24 [async_llm.py:261] Added request cmpl-c64e34238bec4e5ca7e303ab662a4a2e-0.
INFO 03-02 01:05:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:05:26 [logger.py:42] Received request cmpl-6847496de0554cc5a9fc21eafb900435-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:26 [async_llm.py:261] Added request cmpl-6847496de0554cc5a9fc21eafb900435-0.
INFO 03-02 01:05:27 [logger.py:42] Received request cmpl-2979dd343e6f4fc1b51d93442963f09b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:27 [async_llm.py:261] Added request cmpl-2979dd343e6f4fc1b51d93442963f09b-0.
INFO 03-02 01:05:28 [logger.py:42] Received request cmpl-ab2f105e85d94bbd9ba96085b6cfe824-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:28 [async_llm.py:261] Added request cmpl-ab2f105e85d94bbd9ba96085b6cfe824-0.
INFO 03-02 01:05:29 [logger.py:42] Received request cmpl-2e7ffdc705844b4eb03fdf4c0c2d7b3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:29 [async_llm.py:261] Added request cmpl-2e7ffdc705844b4eb03fdf4c0c2d7b3f-0.
INFO 03-02 01:05:30 [logger.py:42] Received request cmpl-c626c745aeb44095b3dea2bfa04bd441-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:30 [async_llm.py:261] Added request cmpl-c626c745aeb44095b3dea2bfa04bd441-0.
INFO 03-02 01:05:31 [logger.py:42] Received request cmpl-eaea0784deb04087879a4bab1565b802-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:31 [async_llm.py:261] Added request cmpl-eaea0784deb04087879a4bab1565b802-0.
INFO 03-02 01:05:32 [logger.py:42] Received request cmpl-0cbfd8d57249426287309bf8ca0b1629-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:32 [async_llm.py:261] Added request cmpl-0cbfd8d57249426287309bf8ca0b1629-0.
INFO 03-02 01:05:33 [logger.py:42] Received request cmpl-e705bce1e1744947a3f5e95f666beb57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:33 [async_llm.py:261] Added request cmpl-e705bce1e1744947a3f5e95f666beb57-0.
INFO 03-02 01:05:34 [logger.py:42] Received request cmpl-66d49e1aad5b40b1bc2f271da22c1c82-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:34 [async_llm.py:261] Added request cmpl-66d49e1aad5b40b1bc2f271da22c1c82-0.
INFO 03-02 01:05:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:05:35 [logger.py:42] Received request cmpl-90d8e8fb93bb4f3aa41c8cef1b02e47e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:35 [async_llm.py:261] Added request cmpl-90d8e8fb93bb4f3aa41c8cef1b02e47e-0.
INFO 03-02 01:05:36 [logger.py:42] Received request cmpl-0f9f90ad50414a1a9cf5aff28982e648-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:36 [async_llm.py:261] Added request cmpl-0f9f90ad50414a1a9cf5aff28982e648-0.
INFO 03-02 01:05:38 [logger.py:42] Received request cmpl-52861bb6b2b045ea9613022a6d762576-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:38 [async_llm.py:261] Added request cmpl-52861bb6b2b045ea9613022a6d762576-0.
INFO 03-02 01:05:39 [logger.py:42] Received request cmpl-da4d946d7f2d4b6cb4b80e7e1e07a1f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:39 [async_llm.py:261] Added request cmpl-da4d946d7f2d4b6cb4b80e7e1e07a1f8-0.
INFO 03-02 01:05:40 [logger.py:42] Received request cmpl-8c17ed6056064219ad047706c1e344aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:40 [async_llm.py:261] Added request cmpl-8c17ed6056064219ad047706c1e344aa-0.
INFO 03-02 01:05:41 [logger.py:42] Received request cmpl-fdc1e21ffb0b45ed9b9359661d7f07d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:41 [async_llm.py:261] Added request cmpl-fdc1e21ffb0b45ed9b9359661d7f07d8-0.
INFO 03-02 01:05:42 [logger.py:42] Received request cmpl-6da5f4e8ee3640a680e94dc071460704-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:42 [async_llm.py:261] Added request cmpl-6da5f4e8ee3640a680e94dc071460704-0.
INFO 03-02 01:05:43 [logger.py:42] Received request cmpl-10836a4e6c0441d7bc062808d1f9a5e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:43 [async_llm.py:261] Added request cmpl-10836a4e6c0441d7bc062808d1f9a5e4-0.
INFO 03-02 01:05:44 [logger.py:42] Received request cmpl-8fc671ee5a3d4c76a1ef04d064a5e866-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:44 [async_llm.py:261] Added request cmpl-8fc671ee5a3d4c76a1ef04d064a5e866-0.
INFO 03-02 01:05:45 [logger.py:42] Received request cmpl-8fada9bbaa6c42e8a6da1c94f7ce8ef4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:45 [async_llm.py:261] Added request cmpl-8fada9bbaa6c42e8a6da1c94f7ce8ef4-0.
INFO 03-02 01:05:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:05:46 [logger.py:42] Received request cmpl-6a509fad4a3a4598b03287f063cd59a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:46 [async_llm.py:261] Added request cmpl-6a509fad4a3a4598b03287f063cd59a7-0.
INFO 03-02 01:05:47 [logger.py:42] Received request cmpl-f73010fb6b084d21ac6a54e0d95ac617-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:47 [async_llm.py:261] Added request cmpl-f73010fb6b084d21ac6a54e0d95ac617-0.
INFO 03-02 01:05:49 [logger.py:42] Received request cmpl-786eac1f575248c6b89ed6af211e6044-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:49 [async_llm.py:261] Added request cmpl-786eac1f575248c6b89ed6af211e6044-0.
INFO 03-02 01:05:50 [logger.py:42] Received request cmpl-83b9478fbd0441f285defd8cac8f4f6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:50 [async_llm.py:261] Added request cmpl-83b9478fbd0441f285defd8cac8f4f6a-0.
INFO 03-02 01:05:51 [logger.py:42] Received request cmpl-185ab2ea07d4429499ef47c006fcb211-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:51 [async_llm.py:261] Added request cmpl-185ab2ea07d4429499ef47c006fcb211-0.
INFO 03-02 01:05:52 [logger.py:42] Received request cmpl-187d2fb95583433a892d092c8ff22f86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:52 [async_llm.py:261] Added request cmpl-187d2fb95583433a892d092c8ff22f86-0.
INFO 03-02 01:05:53 [logger.py:42] Received request cmpl-63044413a44540aa8811689cdfc0f202-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:53 [async_llm.py:261] Added request cmpl-63044413a44540aa8811689cdfc0f202-0.
INFO 03-02 01:05:54 [logger.py:42] Received request cmpl-f19f2c1141434cff86b9e7880988ebe0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:54 [async_llm.py:261] Added request cmpl-f19f2c1141434cff86b9e7880988ebe0-0.
INFO 03-02 01:05:55 [logger.py:42] Received request cmpl-adba31dea099427a9cc4c30bb93d014b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:55 [async_llm.py:261] Added request cmpl-adba31dea099427a9cc4c30bb93d014b-0.
INFO 03-02 01:05:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
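The recurring `loggers.py:116` lines report rolling engine metrics (prompt/generation throughput, queue depth, KV-cache usage). A minimal sketch for pulling the throughput numbers out of those lines — the regex is an assumption based only on the stats lines visible above, not a vLLM API:

```python
import re
from typing import Optional

# Matches the periodic vLLM engine-stats line, e.g.:
#   "... Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, ..."
# Pattern is inferred from the log lines above (hypothetical, not a stable format guarantee).
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen>[\d.]+) tokens/s"
)


def parse_engine_stats(line: str) -> Optional[dict]:
    """Return prompt/generation throughput parsed from a stats log line, or None."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    return {
        "prompt_tps": float(m.group("prompt")),
        "gen_tps": float(m.group("gen")),
    }
```

Feeding the stats line above through `parse_engine_stats` yields `{"prompt_tps": 6.3, "gen_tps": 4.5}`; non-stats lines (the request/200 OK entries) return `None`, so the function can be mapped over the whole log stream.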
INFO 03-02 01:05:56 [logger.py:42] Received request cmpl-c88c8d36a67f471c982ea9761a853e86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:05:56 [async_llm.py:261] Added request cmpl-c88c8d36a67f471c982ea9761a853e86-0.
INFO 03-02 01:06:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
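Each request cycle in the log above is an OpenAI-compatible `POST /v1/completions` call with `temperature=0.0` (greedy decoding) and `max_tokens=5`, served by the vLLM engine inside the funcpod. A minimal client-side sketch, assuming a hypothetical `BASE_URL` in place of the funcpod's actual ingress endpoint:

```python
import json
import urllib.request

# Hypothetical endpoint; substitute the funcpod's real ingress URL.
BASE_URL = "http://localhost:8000"


def build_completion_payload(prompt: str, max_tokens: int = 5) -> dict:
    """Build a /v1/completions payload matching the SamplingParams in the log."""
    return {
        "model": "CR-70B",        # model name from the funcpod table
        "prompt": prompt,
        "max_tokens": max_tokens,  # the log shows max_tokens=5
        "temperature": 0.0,        # greedy decoding, as logged
        "top_p": 1.0,
        "n": 1,
    }


def post_completion(payload: dict) -> dict:
    """POST the payload to the OpenAI-compatible completions route."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_completion_payload("write a quick sort algorithm.")
# post_completion(payload) would produce one "Received request" /
# "Added request" pair like those in the log (requires a live endpoint).
```

Note the Engine stats line: with roughly one 7-token prompt and 5 generated tokens per second, the reported averages (6.3 prompt tok/s, 4.5 gen tok/s) are consistent with each short request completing before the next arrives, which is why `Running` and `Waiting` stay at 0.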
INFO 03-02 01:06:06 [logger.py:42] Received request cmpl-12fb4595a1fc40a1b3c4c15ec58ffc32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:06 [async_llm.py:261] Added request cmpl-12fb4595a1fc40a1b3c4c15ec58ffc32-0.
INFO 03-02 01:06:07 [logger.py:42] Received request cmpl-fd0dfac58224473fb3cd5eeb8f76b69d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:07 [async_llm.py:261] Added request cmpl-fd0dfac58224473fb3cd5eeb8f76b69d-0.
INFO 03-02 01:06:08 [logger.py:42] Received request cmpl-5304cf1a88a34fb8aec6842b9ca1151c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:08 [async_llm.py:261] Added request cmpl-5304cf1a88a34fb8aec6842b9ca1151c-0.
INFO 03-02 01:06:09 [logger.py:42] Received request cmpl-fe21ff9618b740ebb3ad8e220840f029-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:09 [async_llm.py:261] Added request cmpl-fe21ff9618b740ebb3ad8e220840f029-0.
INFO 03-02 01:06:10 [logger.py:42] Received request cmpl-30cdfa40d1f5437e8d7a69de78af6310-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:10 [async_llm.py:261] Added request cmpl-30cdfa40d1f5437e8d7a69de78af6310-0.
INFO 03-02 01:06:12 [logger.py:42] Received request cmpl-6fba7d00db3049709b42f7f0acaba4fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:12 [async_llm.py:261] Added request cmpl-6fba7d00db3049709b42f7f0acaba4fa-0.
INFO 03-02 01:06:13 [logger.py:42] Received request cmpl-ec676895971c422bba251d16e8c6336e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:13 [async_llm.py:261] Added request cmpl-ec676895971c422bba251d16e8c6336e-0.
INFO 03-02 01:06:14 [logger.py:42] Received request cmpl-4b1bb0b189d44d8fa81e022d6a903c13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:14 [async_llm.py:261] Added request cmpl-4b1bb0b189d44d8fa81e022d6a903c13-0.
INFO 03-02 01:06:15 [logger.py:42] Received request cmpl-a31011d18078448e9f93743c87085410-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:15 [async_llm.py:261] Added request cmpl-a31011d18078448e9f93743c87085410-0.
INFO 03-02 01:06:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:16 [logger.py:42] Received request cmpl-f6065e505dc14063bbd14302f0a132de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:16 [async_llm.py:261] Added request cmpl-f6065e505dc14063bbd14302f0a132de-0.
INFO 03-02 01:06:17 [logger.py:42] Received request cmpl-7ac72c705e1449aba8430c07ba343b4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:17 [async_llm.py:261] Added request cmpl-7ac72c705e1449aba8430c07ba343b4a-0.
INFO 03-02 01:06:18 [logger.py:42] Received request cmpl-ca55d0192d864022b3562e1bab0c8a18-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:18 [async_llm.py:261] Added request cmpl-ca55d0192d864022b3562e1bab0c8a18-0.
INFO 03-02 01:06:19 [logger.py:42] Received request cmpl-884d8227f356444da773769f50e5d228-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:19 [async_llm.py:261] Added request cmpl-884d8227f356444da773769f50e5d228-0.
INFO 03-02 01:06:20 [logger.py:42] Received request cmpl-f7a8d252715348538046429991697104-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:20 [async_llm.py:261] Added request cmpl-f7a8d252715348538046429991697104-0.
INFO 03-02 01:06:21 [logger.py:42] Received request cmpl-fc7602831c9f4d108c8429e8ebf3e710-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:21 [async_llm.py:261] Added request cmpl-fc7602831c9f4d108c8429e8ebf3e710-0.
INFO 03-02 01:06:22 [logger.py:42] Received request cmpl-5dea587535a04cb790fae80d40df5219-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:22 [async_llm.py:261] Added request cmpl-5dea587535a04cb790fae80d40df5219-0.
INFO 03-02 01:06:24 [logger.py:42] Received request cmpl-0b9154f7ddd84006a85d15e6cb7fa920-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:24 [async_llm.py:261] Added request cmpl-0b9154f7ddd84006a85d15e6cb7fa920-0.
INFO 03-02 01:06:25 [logger.py:42] Received request cmpl-d5e1feef7da44a39877697c23120c826-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:25 [async_llm.py:261] Added request cmpl-d5e1feef7da44a39877697c23120c826-0.
INFO 03-02 01:06:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:26 [logger.py:42] Received request cmpl-e6ba0862f65f4ffabc7f0f1591832a86-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:26 [async_llm.py:261] Added request cmpl-e6ba0862f65f4ffabc7f0f1591832a86-0.
INFO 03-02 01:06:27 [logger.py:42] Received request cmpl-d4ed75b60d1e4d31a1d2c638af78102f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:27 [async_llm.py:261] Added request cmpl-d4ed75b60d1e4d31a1d2c638af78102f-0.
INFO 03-02 01:06:28 [logger.py:42] Received request cmpl-ffd986d4e2884e19a2157318c42b6e97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:28 [async_llm.py:261] Added request cmpl-ffd986d4e2884e19a2157318c42b6e97-0.
INFO 03-02 01:06:29 [logger.py:42] Received request cmpl-34e89221cbf142a89446d8d075ad13ba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:29 [async_llm.py:261] Added request cmpl-34e89221cbf142a89446d8d075ad13ba-0.
INFO 03-02 01:06:30 [logger.py:42] Received request cmpl-23013eaa37cc49008fe040eed7e23a17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:30 [async_llm.py:261] Added request cmpl-23013eaa37cc49008fe040eed7e23a17-0.
INFO 03-02 01:06:31 [logger.py:42] Received request cmpl-0af7ba70e86542e19006457cf0050830-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:31 [async_llm.py:261] Added request cmpl-0af7ba70e86542e19006457cf0050830-0.
INFO 03-02 01:06:32 [logger.py:42] Received request cmpl-49556c4bef19434897be5c047493984b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:32 [async_llm.py:261] Added request cmpl-49556c4bef19434897be5c047493984b-0.
INFO 03-02 01:06:33 [logger.py:42] Received request cmpl-79669ed9d77f418995d32598b3d4bfa6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:33 [async_llm.py:261] Added request cmpl-79669ed9d77f418995d32598b3d4bfa6-0.
INFO 03-02 01:06:35 [logger.py:42] Received request cmpl-9275b0af688041da8acf3ac5f4c286f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:35 [async_llm.py:261] Added request cmpl-9275b0af688041da8acf3ac5f4c286f1-0.
INFO 03-02 01:06:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:06:36 [logger.py:42] Received request cmpl-7d001947b7714c26a3f379555b1db2bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:06:36 [async_llm.py:261] Added request cmpl-7d001947b7714c26a3f379555b1db2bb-0.
[… 8 further identical probe cycles elided (01:06:37–01:06:44): the same 'write a quick sort algorithm.' request (max_tokens=5, temperature=0.0), one per second, each logged as Received request / "POST /v1/completions" 200 OK / Added request …]
INFO 03-02 01:06:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[… 9 further identical probe cycles elided (01:06:46–01:06:54) …]
INFO 03-02 01:06:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[… 10 further identical probe cycles elided (01:06:55–01:07:05) …]
INFO 03-02 01:07:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[… 9 further identical probe cycles elided (01:07:06–01:07:15) …]
INFO 03-02 01:07:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[… 4 further identical probe cycles elided (01:07:16–01:07:19) …]
INFO 03-02 01:07:21 [logger.py:42] Received request cmpl-3b7b384db67249c6b9ae492a0d939b42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:21 [async_llm.py:261] Added request cmpl-3b7b384db67249c6b9ae492a0d939b42-0.
INFO 03-02 01:07:22 [logger.py:42] Received request cmpl-35c8670a2d1748998daf4e58d6a8aa11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:22 [async_llm.py:261] Added request cmpl-35c8670a2d1748998daf4e58d6a8aa11-0.
INFO 03-02 01:07:23 [logger.py:42] Received request cmpl-49cd455b83604620b4a84af9435b5cc6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:23 [async_llm.py:261] Added request cmpl-49cd455b83604620b4a84af9435b5cc6-0.
INFO 03-02 01:07:24 [logger.py:42] Received request cmpl-967d1ee434d84ce582fff1088251a8ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:24 [async_llm.py:261] Added request cmpl-967d1ee434d84ce582fff1088251a8ec-0.
INFO 03-02 01:07:25 [logger.py:42] Received request cmpl-5c5cef32d35b45bab07dff96b53f6c72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:25 [async_llm.py:261] Added request cmpl-5c5cef32d35b45bab07dff96b53f6c72-0.
INFO 03-02 01:07:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:26 [logger.py:42] Received request cmpl-e6ac125819bd46c78b84d1defc9ed727-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:26 [async_llm.py:261] Added request cmpl-e6ac125819bd46c78b84d1defc9ed727-0.
INFO 03-02 01:07:27 [logger.py:42] Received request cmpl-e6c4f4cfd66147dcafd412f640284cad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:27 [async_llm.py:261] Added request cmpl-e6c4f4cfd66147dcafd412f640284cad-0.
INFO 03-02 01:07:28 [logger.py:42] Received request cmpl-08ccf956c4e14e1089655bd7148398ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:28 [async_llm.py:261] Added request cmpl-08ccf956c4e14e1089655bd7148398ef-0.
INFO 03-02 01:07:29 [logger.py:42] Received request cmpl-5f83d1345ac844bd867852787012b1ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:29 [async_llm.py:261] Added request cmpl-5f83d1345ac844bd867852787012b1ee-0.
INFO 03-02 01:07:30 [logger.py:42] Received request cmpl-67c935f5d7654f2e84d25f4fa276d42b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:30 [async_llm.py:261] Added request cmpl-67c935f5d7654f2e84d25f4fa276d42b-0.
INFO 03-02 01:07:32 [logger.py:42] Received request cmpl-be54d8c0bce7444b9674baf5b130bc0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:32 [async_llm.py:261] Added request cmpl-be54d8c0bce7444b9674baf5b130bc0a-0.
INFO 03-02 01:07:33 [logger.py:42] Received request cmpl-05818edf19f041f6924bb46a95236ad4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:33 [async_llm.py:261] Added request cmpl-05818edf19f041f6924bb46a95236ad4-0.
INFO 03-02 01:07:34 [logger.py:42] Received request cmpl-31592d630ee645eda266278dbd64dda4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:34 [async_llm.py:261] Added request cmpl-31592d630ee645eda266278dbd64dda4-0.
INFO 03-02 01:07:35 [logger.py:42] Received request cmpl-d5a8aa37e06a40de82341f87f2127897-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:35 [async_llm.py:261] Added request cmpl-d5a8aa37e06a40de82341f87f2127897-0.
INFO 03-02 01:07:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:36 [logger.py:42] Received request cmpl-0bf29b8030c44aa6ac41e39fc477f615-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:36 [async_llm.py:261] Added request cmpl-0bf29b8030c44aa6ac41e39fc477f615-0.
INFO 03-02 01:07:37 [logger.py:42] Received request cmpl-7c80629b8fdd4f4da63cb3ed6d640c90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:37 [async_llm.py:261] Added request cmpl-7c80629b8fdd4f4da63cb3ed6d640c90-0.
INFO 03-02 01:07:38 [logger.py:42] Received request cmpl-db6096a7a5b24d4085386a26d7500992-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:38 [async_llm.py:261] Added request cmpl-db6096a7a5b24d4085386a26d7500992-0.
INFO 03-02 01:07:39 [logger.py:42] Received request cmpl-27f790e2a8944524a8c51dbda89e514d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:39 [async_llm.py:261] Added request cmpl-27f790e2a8944524a8c51dbda89e514d-0.
INFO 03-02 01:07:40 [logger.py:42] Received request cmpl-62db08e276044fba81cfa0cf700ef92a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:40 [async_llm.py:261] Added request cmpl-62db08e276044fba81cfa0cf700ef92a-0.
INFO 03-02 01:07:41 [logger.py:42] Received request cmpl-65f03256a11f4ccb9f3cc82bfecb7b1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:41 [async_llm.py:261] Added request cmpl-65f03256a11f4ccb9f3cc82bfecb7b1a-0.
INFO 03-02 01:07:42 [logger.py:42] Received request cmpl-181ca3adee39416285ec4e9e65e5770a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:42 [async_llm.py:261] Added request cmpl-181ca3adee39416285ec4e9e65e5770a-0.
INFO 03-02 01:07:44 [logger.py:42] Received request cmpl-b5b9b4cb9dc14ea2949b42872b3579d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:44 [async_llm.py:261] Added request cmpl-b5b9b4cb9dc14ea2949b42872b3579d5-0.
INFO 03-02 01:07:45 [logger.py:42] Received request cmpl-4a295ef5e4024cbb87b0ed959cb2d340-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:45 [async_llm.py:261] Added request cmpl-4a295ef5e4024cbb87b0ed959cb2d340-0.
INFO 03-02 01:07:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:07:46 [logger.py:42] Received request cmpl-56d0d9d67e924107b745e398e2705d38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:46 [async_llm.py:261] Added request cmpl-56d0d9d67e924107b745e398e2705d38-0.
INFO 03-02 01:07:47 [logger.py:42] Received request cmpl-78291ffddfd946c5b9cdb0514b4da2ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:47 [async_llm.py:261] Added request cmpl-78291ffddfd946c5b9cdb0514b4da2ed-0.
INFO 03-02 01:07:48 [logger.py:42] Received request cmpl-16d44bcd9874485896c0d9599fc6fe97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:48 [async_llm.py:261] Added request cmpl-16d44bcd9874485896c0d9599fc6fe97-0.
INFO 03-02 01:07:49 [logger.py:42] Received request cmpl-2892dc03a7b74f32816f4f0aead72028-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:49 [async_llm.py:261] Added request cmpl-2892dc03a7b74f32816f4f0aead72028-0.
INFO 03-02 01:07:50 [logger.py:42] Received request cmpl-0a19531a798140c4b5ba76fe89dbb9dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:50 [async_llm.py:261] Added request cmpl-0a19531a798140c4b5ba76fe89dbb9dc-0.
INFO 03-02 01:07:51 [logger.py:42] Received request cmpl-4b866523dcd24066beed1eab1404659a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:51 [async_llm.py:261] Added request cmpl-4b866523dcd24066beed1eab1404659a-0.
INFO 03-02 01:07:52 [logger.py:42] Received request cmpl-9a75d5326f944bac944454dbd252987b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:52 [async_llm.py:261] Added request cmpl-9a75d5326f944bac944454dbd252987b-0.
INFO 03-02 01:07:53 [logger.py:42] Received request cmpl-cea4b64aaecb47c89462ab37a248e4db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:53 [async_llm.py:261] Added request cmpl-cea4b64aaecb47c89462ab37a248e4db-0.
INFO 03-02 01:07:55 [logger.py:42] Received request cmpl-e763a6aa3440445bbb2218797518df3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:07:55 [async_llm.py:261] Added request cmpl-e763a6aa3440445bbb2218797518df3b-0.
INFO 03-02 01:07:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[… 01:07:56–01:08:39: the same "Received request" / "200 OK" / "Added request" triplet repeats roughly once per second, each time with a fresh cmpl-… request ID and an identical prompt and SamplingParams; the periodic engine-stat lines emitted during this window are: …]
INFO 03-02 01:08:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:39 [async_llm.py:261] Added request cmpl-f786cc43d62d474e801588c4ba14ccf7-0.
INFO 03-02 01:08:41 [logger.py:42] Received request cmpl-4bfe6742ee84473f9f366f9b473b39f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:41 [async_llm.py:261] Added request cmpl-4bfe6742ee84473f9f366f9b473b39f8-0.
INFO 03-02 01:08:42 [logger.py:42] Received request cmpl-59ce6e6399534ca7bf7eb9cb792340d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:42 [async_llm.py:261] Added request cmpl-59ce6e6399534ca7bf7eb9cb792340d2-0.
INFO 03-02 01:08:43 [logger.py:42] Received request cmpl-e0a49707d87043049d9cfcd2614e5ce5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:43 [async_llm.py:261] Added request cmpl-e0a49707d87043049d9cfcd2614e5ce5-0.
INFO 03-02 01:08:44 [logger.py:42] Received request cmpl-23da4523b06e4f2e95277aa37690ed7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:44 [async_llm.py:261] Added request cmpl-23da4523b06e4f2e95277aa37690ed7f-0.
INFO 03-02 01:08:45 [logger.py:42] Received request cmpl-a297da9e417f462f87b08b7813a8fa16-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:45 [async_llm.py:261] Added request cmpl-a297da9e417f462f87b08b7813a8fa16-0.
INFO 03-02 01:08:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:46 [logger.py:42] Received request cmpl-0ae6b3017d01452b9f4cf228f1fda77d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:46 [async_llm.py:261] Added request cmpl-0ae6b3017d01452b9f4cf228f1fda77d-0.
INFO 03-02 01:08:47 [logger.py:42] Received request cmpl-484aa282f8fb4088813670ce55de828a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:47 [async_llm.py:261] Added request cmpl-484aa282f8fb4088813670ce55de828a-0.
INFO 03-02 01:08:48 [logger.py:42] Received request cmpl-df694b8d58e74951bd5666c48d11dd04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:48 [async_llm.py:261] Added request cmpl-df694b8d58e74951bd5666c48d11dd04-0.
INFO 03-02 01:08:49 [logger.py:42] Received request cmpl-356bdd03b60e47b1b0d99887afb8b75d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:49 [async_llm.py:261] Added request cmpl-356bdd03b60e47b1b0d99887afb8b75d-0.
INFO 03-02 01:08:50 [logger.py:42] Received request cmpl-58c804af812748199590bd6aeb5bfa13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:50 [async_llm.py:261] Added request cmpl-58c804af812748199590bd6aeb5bfa13-0.
INFO 03-02 01:08:51 [logger.py:42] Received request cmpl-3e034cdf3ed1490591badd5d2f8b535a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:51 [async_llm.py:261] Added request cmpl-3e034cdf3ed1490591badd5d2f8b535a-0.
INFO 03-02 01:08:53 [logger.py:42] Received request cmpl-d72ae781054c40fc9b286a62a68dce03-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:53 [async_llm.py:261] Added request cmpl-d72ae781054c40fc9b286a62a68dce03-0.
INFO 03-02 01:08:54 [logger.py:42] Received request cmpl-00aeec081c5d42d48754eacca89a5aea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:54 [async_llm.py:261] Added request cmpl-00aeec081c5d42d48754eacca89a5aea-0.
INFO 03-02 01:08:55 [logger.py:42] Received request cmpl-b378ea8d99c741c3ad54ce043303f0bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:55 [async_llm.py:261] Added request cmpl-b378ea8d99c741c3ad54ce043303f0bd-0.
INFO 03-02 01:08:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:08:56 [logger.py:42] Received request cmpl-f2f61a5a700e4aaaaeeb3bd97aad87b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:56 [async_llm.py:261] Added request cmpl-f2f61a5a700e4aaaaeeb3bd97aad87b6-0.
INFO 03-02 01:08:57 [logger.py:42] Received request cmpl-dc85a082a23b4880ac38403655ba85a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:57 [async_llm.py:261] Added request cmpl-dc85a082a23b4880ac38403655ba85a3-0.
INFO 03-02 01:08:58 [logger.py:42] Received request cmpl-41fc52b476ab49359cb73974230b1481-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:58 [async_llm.py:261] Added request cmpl-41fc52b476ab49359cb73974230b1481-0.
INFO 03-02 01:08:59 [logger.py:42] Received request cmpl-0a44ae865bc2460a85f13e9b16df6f9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:08:59 [async_llm.py:261] Added request cmpl-0a44ae865bc2460a85f13e9b16df6f9d-0.
INFO 03-02 01:09:00 [logger.py:42] Received request cmpl-fbaf2f71996245d085a11ab3c358a8ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:00 [async_llm.py:261] Added request cmpl-fbaf2f71996245d085a11ab3c358a8ef-0.
INFO 03-02 01:09:01 [logger.py:42] Received request cmpl-a29e1f4b935040eaa2acbd624a17eec7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:01 [async_llm.py:261] Added request cmpl-a29e1f4b935040eaa2acbd624a17eec7-0.
INFO 03-02 01:09:02 [logger.py:42] Received request cmpl-60e16db1b958423d84070b53ea024503-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:02 [async_llm.py:261] Added request cmpl-60e16db1b958423d84070b53ea024503-0.
INFO 03-02 01:09:04 [logger.py:42] Received request cmpl-0a0610b1002046c8896515d7c671c000-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:04 [async_llm.py:261] Added request cmpl-0a0610b1002046c8896515d7c671c000-0.
INFO 03-02 01:09:05 [logger.py:42] Received request cmpl-76e24bed4ba3402ab6ad5db568b7e81a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:05 [async_llm.py:261] Added request cmpl-76e24bed4ba3402ab6ad5db568b7e81a-0.
INFO 03-02 01:09:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:06 [logger.py:42] Received request cmpl-a72d39d821d742dd992a8075912d635f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:06 [async_llm.py:261] Added request cmpl-a72d39d821d742dd992a8075912d635f-0.
INFO 03-02 01:09:07 [logger.py:42] Received request cmpl-a661bdd1ec754966a62236ce67269b4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:07 [async_llm.py:261] Added request cmpl-a661bdd1ec754966a62236ce67269b4e-0.
INFO 03-02 01:09:08 [logger.py:42] Received request cmpl-20aa479639524e7a8bc989c3cbf8db48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:08 [async_llm.py:261] Added request cmpl-20aa479639524e7a8bc989c3cbf8db48-0.
INFO 03-02 01:09:09 [logger.py:42] Received request cmpl-a6194752cd644fc28c2350a54b113e0e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:09 [async_llm.py:261] Added request cmpl-a6194752cd644fc28c2350a54b113e0e-0.
INFO 03-02 01:09:10 [logger.py:42] Received request cmpl-a840d3ad1df2433db712785e356d6486-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:10 [async_llm.py:261] Added request cmpl-a840d3ad1df2433db712785e356d6486-0.
INFO 03-02 01:09:11 [logger.py:42] Received request cmpl-02d3eef60b72445bb4cefd4d8d91b3e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:11 [async_llm.py:261] Added request cmpl-02d3eef60b72445bb4cefd4d8d91b3e7-0.
INFO 03-02 01:09:12 [logger.py:42] Received request cmpl-54771a0303314436bb6551b87b3469c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:12 [async_llm.py:261] Added request cmpl-54771a0303314436bb6551b87b3469c7-0.
INFO 03-02 01:09:13 [logger.py:42] Received request cmpl-9de787470a164b829758a2c23b9a45d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:13 [async_llm.py:261] Added request cmpl-9de787470a164b829758a2c23b9a45d8-0.
[... repeated request cycles elided: from 03-02 01:09:15 through 01:09:57 the same completion request (identical prompt, SamplingParams, and prompt_token_ids; max_tokens=5) arrives roughly once per second from 1.2.3.5:1235, each with a unique cmpl-* request id, and each cycle logs the same "Received request" / "200 OK" / "Added request" triplet. Only the periodic engine-stats lines are retained below ...]
INFO 03-02 01:09:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:09:57 [async_llm.py:261] Added request cmpl-d1c2c5f205f14dc8a855a0f67f3533ea-0.
INFO 03-02 01:09:58 [logger.py:42] Received request cmpl-fd105089dce440d08ac10a9aa38db20f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:58 [async_llm.py:261] Added request cmpl-fd105089dce440d08ac10a9aa38db20f-0.
INFO 03-02 01:09:59 [logger.py:42] Received request cmpl-95aae938af354cc3979c8f335e61ca48-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:09:59 [async_llm.py:261] Added request cmpl-95aae938af354cc3979c8f335e61ca48-0.
INFO 03-02 01:10:01 [logger.py:42] Received request cmpl-4680e29107a34f6faf024f7a4dacc152-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:01 [async_llm.py:261] Added request cmpl-4680e29107a34f6faf024f7a4dacc152-0.
INFO 03-02 01:10:02 [logger.py:42] Received request cmpl-d94eeed5dbad4bafa5eb7c6c6c3f37f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:02 [async_llm.py:261] Added request cmpl-d94eeed5dbad4bafa5eb7c6c6c3f37f2-0.
INFO 03-02 01:10:03 [logger.py:42] Received request cmpl-5ed3f004ae5442ebb1e5f92d47b8c002-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:03 [async_llm.py:261] Added request cmpl-5ed3f004ae5442ebb1e5f92d47b8c002-0.
INFO 03-02 01:10:04 [logger.py:42] Received request cmpl-5dad7642334f42378b5899de32a23261-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:04 [async_llm.py:261] Added request cmpl-5dad7642334f42378b5899de32a23261-0.
INFO 03-02 01:10:05 [logger.py:42] Received request cmpl-a9b7059e93554b5494581fd196b5ade4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:05 [async_llm.py:261] Added request cmpl-a9b7059e93554b5494581fd196b5ade4-0.
INFO 03-02 01:10:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:06 [logger.py:42] Received request cmpl-b43954d2a4c649cd8672549abdda20fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:06 [async_llm.py:261] Added request cmpl-b43954d2a4c649cd8672549abdda20fa-0.
INFO 03-02 01:10:07 [logger.py:42] Received request cmpl-a384a63376e44723baf35207af9b0e06-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:07 [async_llm.py:261] Added request cmpl-a384a63376e44723baf35207af9b0e06-0.
INFO 03-02 01:10:08 [logger.py:42] Received request cmpl-4ea7644d57ff431197ea873d864d47dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:08 [async_llm.py:261] Added request cmpl-4ea7644d57ff431197ea873d864d47dd-0.
INFO 03-02 01:10:09 [logger.py:42] Received request cmpl-e1f4a4ef6f9c4fa0864decb9800badb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:09 [async_llm.py:261] Added request cmpl-e1f4a4ef6f9c4fa0864decb9800badb5-0.
INFO 03-02 01:10:10 [logger.py:42] Received request cmpl-d4d2a4223a7246da8c6a9b9763a97f4d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:10 [async_llm.py:261] Added request cmpl-d4d2a4223a7246da8c6a9b9763a97f4d-0.
INFO 03-02 01:10:11 [logger.py:42] Received request cmpl-ea1eac5c81e7456784987154a655aec9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:11 [async_llm.py:261] Added request cmpl-ea1eac5c81e7456784987154a655aec9-0.
INFO 03-02 01:10:13 [logger.py:42] Received request cmpl-3235c280bf9040dda3a38fe3c38aa3be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:13 [async_llm.py:261] Added request cmpl-3235c280bf9040dda3a38fe3c38aa3be-0.
INFO 03-02 01:10:14 [logger.py:42] Received request cmpl-3a96bc4c068f448384f5f5343b86f0d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:14 [async_llm.py:261] Added request cmpl-3a96bc4c068f448384f5f5343b86f0d7-0.
INFO 03-02 01:10:15 [logger.py:42] Received request cmpl-1b34648e647c499386f4ede761ae48f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:15 [async_llm.py:261] Added request cmpl-1b34648e647c499386f4ede761ae48f8-0.
INFO 03-02 01:10:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:16 [logger.py:42] Received request cmpl-3d38dc7ee60543cdb890653ead06b22a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:16 [async_llm.py:261] Added request cmpl-3d38dc7ee60543cdb890653ead06b22a-0.
INFO 03-02 01:10:17 [logger.py:42] Received request cmpl-ddc756cadf5b4071b3eaaaf917d14f1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:17 [async_llm.py:261] Added request cmpl-ddc756cadf5b4071b3eaaaf917d14f1d-0.
INFO 03-02 01:10:18 [logger.py:42] Received request cmpl-c16806d05a5647aca76ef723585228be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:18 [async_llm.py:261] Added request cmpl-c16806d05a5647aca76ef723585228be-0.
INFO 03-02 01:10:19 [logger.py:42] Received request cmpl-6baa9e4c4e1649a6bcf8bec5f54af2f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:19 [async_llm.py:261] Added request cmpl-6baa9e4c4e1649a6bcf8bec5f54af2f0-0.
INFO 03-02 01:10:20 [logger.py:42] Received request cmpl-a90b83a01f0d4b27b25ecbec12ec707f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:20 [async_llm.py:261] Added request cmpl-a90b83a01f0d4b27b25ecbec12ec707f-0.
INFO 03-02 01:10:21 [logger.py:42] Received request cmpl-54bf26d907d04bb2baf2119ba7d1e50e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:21 [async_llm.py:261] Added request cmpl-54bf26d907d04bb2baf2119ba7d1e50e-0.
INFO 03-02 01:10:22 [logger.py:42] Received request cmpl-0590eef12f0e471eaf98f50157229d29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:22 [async_llm.py:261] Added request cmpl-0590eef12f0e471eaf98f50157229d29-0.
INFO 03-02 01:10:24 [logger.py:42] Received request cmpl-8afa78d94d0a4629a7acb1045ba8683e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:24 [async_llm.py:261] Added request cmpl-8afa78d94d0a4629a7acb1045ba8683e-0.
INFO 03-02 01:10:25 [logger.py:42] Received request cmpl-b3592faac8524b6e87748b3db41f39df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:25 [async_llm.py:261] Added request cmpl-b3592faac8524b6e87748b3db41f39df-0.
INFO 03-02 01:10:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:26 [logger.py:42] Received request cmpl-20b5d6eb52424e0f878ee1b614e0f2d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:26 [async_llm.py:261] Added request cmpl-20b5d6eb52424e0f878ee1b614e0f2d3-0.
INFO 03-02 01:10:27 [logger.py:42] Received request cmpl-a0ac63af181546b9aa5081d3cf1864d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:27 [async_llm.py:261] Added request cmpl-a0ac63af181546b9aa5081d3cf1864d1-0.
INFO 03-02 01:10:28 [logger.py:42] Received request cmpl-b9553ed1382b42b58cbbae73d46c73f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:28 [async_llm.py:261] Added request cmpl-b9553ed1382b42b58cbbae73d46c73f8-0.
INFO 03-02 01:10:29 [logger.py:42] Received request cmpl-1a1d46bee18d4399b33bb0f26d0b45f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:29 [async_llm.py:261] Added request cmpl-1a1d46bee18d4399b33bb0f26d0b45f8-0.
INFO 03-02 01:10:30 [logger.py:42] Received request cmpl-13ef7f0f02f24c2eb644dcb27ce5b80f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:30 [async_llm.py:261] Added request cmpl-13ef7f0f02f24c2eb644dcb27ce5b80f-0.
INFO 03-02 01:10:31 [logger.py:42] Received request cmpl-3ecc821f0c344ce8836539b354a0f6d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:31 [async_llm.py:261] Added request cmpl-3ecc821f0c344ce8836539b354a0f6d4-0.
INFO 03-02 01:10:32 [logger.py:42] Received request cmpl-29cfe99882f84fb99fb402fc360ff600-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:32 [async_llm.py:261] Added request cmpl-29cfe99882f84fb99fb402fc360ff600-0.
INFO 03-02 01:10:33 [logger.py:42] Received request cmpl-7d6b3b35b4ce45ada0641ce99a3dec4e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:33 [async_llm.py:261] Added request cmpl-7d6b3b35b4ce45ada0641ce99a3dec4e-0.
INFO 03-02 01:10:35 [logger.py:42] Received request cmpl-629b1189fccb40bc8bcee5d2e0a94a79-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:35 [async_llm.py:261] Added request cmpl-629b1189fccb40bc8bcee5d2e0a94a79-0.
INFO 03-02 01:10:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:36 [logger.py:42] Received request cmpl-cf747eb709da438ebbcac396170783aa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:36 [async_llm.py:261] Added request cmpl-cf747eb709da438ebbcac396170783aa-0.
INFO 03-02 01:10:37 [logger.py:42] Received request cmpl-90109457eae346edafa129496c597125-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:37 [async_llm.py:261] Added request cmpl-90109457eae346edafa129496c597125-0.
INFO 03-02 01:10:38 [logger.py:42] Received request cmpl-84e34bb68abc458288f6acce5aeb8441-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:38 [async_llm.py:261] Added request cmpl-84e34bb68abc458288f6acce5aeb8441-0.
INFO 03-02 01:10:39 [logger.py:42] Received request cmpl-bef3e62de25e4476bbbde4c12980ca28-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:39 [async_llm.py:261] Added request cmpl-bef3e62de25e4476bbbde4c12980ca28-0.
INFO 03-02 01:10:40 [logger.py:42] Received request cmpl-3a6ead14674540b99cfabb81a86d6f38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:40 [async_llm.py:261] Added request cmpl-3a6ead14674540b99cfabb81a86d6f38-0.
INFO 03-02 01:10:41 [logger.py:42] Received request cmpl-e8482770078a4e2a98270e5c260a07e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:41 [async_llm.py:261] Added request cmpl-e8482770078a4e2a98270e5c260a07e3-0.
INFO 03-02 01:10:42 [logger.py:42] Received request cmpl-40c8d729fcb548b7aa3182672abb09ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:42 [async_llm.py:261] Added request cmpl-40c8d729fcb548b7aa3182672abb09ad-0.
INFO 03-02 01:10:43 [logger.py:42] Received request cmpl-0209ef01631f4e98bb99c42c63f443b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:43 [async_llm.py:261] Added request cmpl-0209ef01631f4e98bb99c42c63f443b0-0.
INFO 03-02 01:10:44 [logger.py:42] Received request cmpl-53078719850e4331af9dfc4223e7b68d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:44 [async_llm.py:261] Added request cmpl-53078719850e4331af9dfc4223e7b68d-0.
INFO 03-02 01:10:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:45 [logger.py:42] Received request cmpl-de22dbbbf55c4360a7cfb1e7861ae9b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:45 [async_llm.py:261] Added request cmpl-de22dbbbf55c4360a7cfb1e7861ae9b4-0.
INFO 03-02 01:10:47 [logger.py:42] Received request cmpl-f4c6f2a888c347fe817f73ed40351202-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:47 [async_llm.py:261] Added request cmpl-f4c6f2a888c347fe817f73ed40351202-0.
INFO 03-02 01:10:48 [logger.py:42] Received request cmpl-312a1f725d244ac498272beefb564eae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:48 [async_llm.py:261] Added request cmpl-312a1f725d244ac498272beefb564eae-0.
INFO 03-02 01:10:49 [logger.py:42] Received request cmpl-0c0a7400c7d54214bd27f5405b3b6666-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:49 [async_llm.py:261] Added request cmpl-0c0a7400c7d54214bd27f5405b3b6666-0.
INFO 03-02 01:10:50 [logger.py:42] Received request cmpl-7ed9f2469374489ca8d8bdf9b4a4494c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:50 [async_llm.py:261] Added request cmpl-7ed9f2469374489ca8d8bdf9b4a4494c-0.
INFO 03-02 01:10:51 [logger.py:42] Received request cmpl-9c24829baee1466fbc79cf691443accd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:51 [async_llm.py:261] Added request cmpl-9c24829baee1466fbc79cf691443accd-0.
INFO 03-02 01:10:52 [logger.py:42] Received request cmpl-c628f7513191431bba17b2bf5c066c78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:52 [async_llm.py:261] Added request cmpl-c628f7513191431bba17b2bf5c066c78-0.
INFO 03-02 01:10:53 [logger.py:42] Received request cmpl-3fa9ceebca674d6b9fe62729433cf4ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:53 [async_llm.py:261] Added request cmpl-3fa9ceebca674d6b9fe62729433cf4ef-0.
INFO 03-02 01:10:54 [logger.py:42] Received request cmpl-8de21f150c714ba081501027557f3999-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:54 [async_llm.py:261] Added request cmpl-8de21f150c714ba081501027557f3999-0.
INFO 03-02 01:10:55 [logger.py:42] Received request cmpl-ecb314e77e924c059c06869ca951636f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:55 [async_llm.py:261] Added request cmpl-ecb314e77e924c059c06869ca951636f-0.
INFO 03-02 01:10:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:10:56 [logger.py:42] Received request cmpl-42f5e9b9394e4589b97d69d6342d20c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:56 [async_llm.py:261] Added request cmpl-42f5e9b9394e4589b97d69d6342d20c4-0.
INFO 03-02 01:10:58 [logger.py:42] Received request cmpl-a7d014ac95624537a793581ddbde431d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:58 [async_llm.py:261] Added request cmpl-a7d014ac95624537a793581ddbde431d-0.
INFO 03-02 01:10:59 [logger.py:42] Received request cmpl-95be51df0cc04ff2b1516e66a8e72043-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:10:59 [async_llm.py:261] Added request cmpl-95be51df0cc04ff2b1516e66a8e72043-0.
INFO 03-02 01:11:00 [logger.py:42] Received request cmpl-d6c9008675504e14897023d0fcf9ab95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:00 [async_llm.py:261] Added request cmpl-d6c9008675504e14897023d0fcf9ab95-0.
INFO 03-02 01:11:01 [logger.py:42] Received request cmpl-8f3448f3baa642bd8cdc5dd7e12d1b3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:01 [async_llm.py:261] Added request cmpl-8f3448f3baa642bd8cdc5dd7e12d1b3d-0.
INFO 03-02 01:11:02 [logger.py:42] Received request cmpl-bc7731154cbc4a1483f1de3b6e64e3c9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:02 [async_llm.py:261] Added request cmpl-bc7731154cbc4a1483f1de3b6e64e3c9-0.
INFO 03-02 01:11:03 [logger.py:42] Received request cmpl-63e76f5aabc64c8fb7be75bb1d4d64ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:03 [async_llm.py:261] Added request cmpl-63e76f5aabc64c8fb7be75bb1d4d64ae-0.
INFO 03-02 01:11:04 [logger.py:42] Received request cmpl-c96a8e01c43f4fe99d23c62a0085e306-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:04 [async_llm.py:261] Added request cmpl-c96a8e01c43f4fe99d23c62a0085e306-0.
INFO 03-02 01:11:05 [logger.py:42] Received request cmpl-bc4aa781d464456ab8a526da6c91e421-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:05 [async_llm.py:261] Added request cmpl-bc4aa781d464456ab8a526da6c91e421-0.
INFO 03-02 01:11:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:11:06 [logger.py:42] Received request cmpl-12230ce436c849d58c891398ad01bb04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:06 [async_llm.py:261] Added request cmpl-12230ce436c849d58c891398ad01bb04-0.
INFO 03-02 01:11:07 [logger.py:42] Received request cmpl-1899b4b6b9c541de8995b9cc78f13c17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:07 [async_llm.py:261] Added request cmpl-1899b4b6b9c541de8995b9cc78f13c17-0.
INFO 03-02 01:11:08 [logger.py:42] Received request cmpl-b82b6d365703465cb00a763c1fc19ddc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:08 [async_llm.py:261] Added request cmpl-b82b6d365703465cb00a763c1fc19ddc-0.
INFO 03-02 01:11:10 [logger.py:42] Received request cmpl-cac55c94a07d4c3e8a7dff6c77e2cfc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:10 [async_llm.py:261] Added request cmpl-cac55c94a07d4c3e8a7dff6c77e2cfc9-0.
INFO 03-02 01:11:11 [logger.py:42] Received request cmpl-8bc3658936a34762bcd18beb86a7f939-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:11 [async_llm.py:261] Added request cmpl-8bc3658936a34762bcd18beb86a7f939-0.
INFO 03-02 01:11:12 [logger.py:42] Received request cmpl-510d7e909c3e425e9bfd6281ad7eb9d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:12 [async_llm.py:261] Added request cmpl-510d7e909c3e425e9bfd6281ad7eb9d2-0.
INFO 03-02 01:11:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:11:56 [async_llm.py:261] Added request cmpl-a2aa3334ba2b4e2b876810e8707278d2-0.
INFO 03-02 01:11:57 [logger.py:42] Received request cmpl-d6d0d28cd01c4184b9e66ac9b606d56d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:57 [async_llm.py:261] Added request cmpl-d6d0d28cd01c4184b9e66ac9b606d56d-0.
INFO 03-02 01:11:58 [logger.py:42] Received request cmpl-59b0324fbe6b428ead817e55a50c5bfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:58 [async_llm.py:261] Added request cmpl-59b0324fbe6b428ead817e55a50c5bfe-0.
INFO 03-02 01:11:59 [logger.py:42] Received request cmpl-4e43e71d6b1a42db9ee65b23032f64f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:11:59 [async_llm.py:261] Added request cmpl-4e43e71d6b1a42db9ee65b23032f64f3-0.
INFO 03-02 01:12:00 [logger.py:42] Received request cmpl-73537dbcff2a48e1ac9341d319e4799c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:00 [async_llm.py:261] Added request cmpl-73537dbcff2a48e1ac9341d319e4799c-0.
INFO 03-02 01:12:01 [logger.py:42] Received request cmpl-9e4dd6c67829473e93645a926dd3a8c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:01 [async_llm.py:261] Added request cmpl-9e4dd6c67829473e93645a926dd3a8c6-0.
INFO 03-02 01:12:02 [logger.py:42] Received request cmpl-bd431aa1864f4cbfa7e673d30bdad49f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:02 [async_llm.py:261] Added request cmpl-bd431aa1864f4cbfa7e673d30bdad49f-0.
INFO 03-02 01:12:03 [logger.py:42] Received request cmpl-57200932ec4642d9aae6709582b2aaf1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:03 [async_llm.py:261] Added request cmpl-57200932ec4642d9aae6709582b2aaf1-0.
INFO 03-02 01:12:04 [logger.py:42] Received request cmpl-6b15bea774e54ac78dc0a6fb6d61eeae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:04 [async_llm.py:261] Added request cmpl-6b15bea774e54ac78dc0a6fb6d61eeae-0.
INFO 03-02 01:12:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:05 [logger.py:42] Received request cmpl-6a2159b632ec4914ada1a760aa11d628-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:05 [async_llm.py:261] Added request cmpl-6a2159b632ec4914ada1a760aa11d628-0.
INFO 03-02 01:12:07 [logger.py:42] Received request cmpl-613a673fb0db4ba6815922f54871631f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:07 [async_llm.py:261] Added request cmpl-613a673fb0db4ba6815922f54871631f-0.
INFO 03-02 01:12:08 [logger.py:42] Received request cmpl-f7ced7f1083446fabdcb25f428cb8730-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:08 [async_llm.py:261] Added request cmpl-f7ced7f1083446fabdcb25f428cb8730-0.
INFO 03-02 01:12:09 [logger.py:42] Received request cmpl-4f0c095bc5724ee5aef0e0863c4382c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:09 [async_llm.py:261] Added request cmpl-4f0c095bc5724ee5aef0e0863c4382c5-0.
INFO 03-02 01:12:10 [logger.py:42] Received request cmpl-855e5002b9c14297bf5c421a14ed0f74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:10 [async_llm.py:261] Added request cmpl-855e5002b9c14297bf5c421a14ed0f74-0.
INFO 03-02 01:12:11 [logger.py:42] Received request cmpl-36750cc7346e44ea8787ca012e5356a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:11 [async_llm.py:261] Added request cmpl-36750cc7346e44ea8787ca012e5356a1-0.
INFO 03-02 01:12:12 [logger.py:42] Received request cmpl-d23f15f1832948f7845dc4c11747a63f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:12 [async_llm.py:261] Added request cmpl-d23f15f1832948f7845dc4c11747a63f-0.
INFO 03-02 01:12:13 [logger.py:42] Received request cmpl-f60e3f202b214ab2b37cc997c0561565-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:13 [async_llm.py:261] Added request cmpl-f60e3f202b214ab2b37cc997c0561565-0.
INFO 03-02 01:12:14 [logger.py:42] Received request cmpl-56d9cb9494c64eb5ae344bbf628bf251-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:14 [async_llm.py:261] Added request cmpl-56d9cb9494c64eb5ae344bbf628bf251-0.
INFO 03-02 01:12:15 [logger.py:42] Received request cmpl-3f2cf111f96147f29bb6303a03e08905-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:15 [async_llm.py:261] Added request cmpl-3f2cf111f96147f29bb6303a03e08905-0.
INFO 03-02 01:12:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:16 [logger.py:42] Received request cmpl-a1ce7391144a48e0a7a3d326ef15e985-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:16 [async_llm.py:261] Added request cmpl-a1ce7391144a48e0a7a3d326ef15e985-0.
INFO 03-02 01:12:17 [logger.py:42] Received request cmpl-b26edf0209c74963af80873b525c57cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:17 [async_llm.py:261] Added request cmpl-b26edf0209c74963af80873b525c57cb-0.
INFO 03-02 01:12:19 [logger.py:42] Received request cmpl-318abb03dead4276815d360e5424086e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:19 [async_llm.py:261] Added request cmpl-318abb03dead4276815d360e5424086e-0.
INFO 03-02 01:12:20 [logger.py:42] Received request cmpl-1811506df2ed4208b33327fd6a2b2002-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:20 [async_llm.py:261] Added request cmpl-1811506df2ed4208b33327fd6a2b2002-0.
INFO 03-02 01:12:21 [logger.py:42] Received request cmpl-33ae42653082418299543deb55bd213e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:21 [async_llm.py:261] Added request cmpl-33ae42653082418299543deb55bd213e-0.
INFO 03-02 01:12:22 [logger.py:42] Received request cmpl-ef49230e1b3042648a48974a2b38f4b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:22 [async_llm.py:261] Added request cmpl-ef49230e1b3042648a48974a2b38f4b1-0.
INFO 03-02 01:12:23 [logger.py:42] Received request cmpl-c05855b0ce334e58b96d342e80fba256-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:23 [async_llm.py:261] Added request cmpl-c05855b0ce334e58b96d342e80fba256-0.
INFO 03-02 01:12:24 [logger.py:42] Received request cmpl-cc7a0f5045b44aa6bf5383716c58a97d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:24 [async_llm.py:261] Added request cmpl-cc7a0f5045b44aa6bf5383716c58a97d-0.
INFO 03-02 01:12:25 [logger.py:42] Received request cmpl-9dd9c786aff845f8a306e1c6e6f3f772-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:25 [async_llm.py:261] Added request cmpl-9dd9c786aff845f8a306e1c6e6f3f772-0.
INFO 03-02 01:12:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:26 [logger.py:42] Received request cmpl-47dc9c32ae524a8fb97198331d015583-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:26 [async_llm.py:261] Added request cmpl-47dc9c32ae524a8fb97198331d015583-0.
INFO 03-02 01:12:27 [logger.py:42] Received request cmpl-5e1543db5da34b038f010c37e9f43730-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:27 [async_llm.py:261] Added request cmpl-5e1543db5da34b038f010c37e9f43730-0.
INFO 03-02 01:12:28 [logger.py:42] Received request cmpl-a86a92ee42e14cdc999e6af2e9d36a45-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:28 [async_llm.py:261] Added request cmpl-a86a92ee42e14cdc999e6af2e9d36a45-0.
INFO 03-02 01:12:30 [logger.py:42] Received request cmpl-199bbf5a524a425a8f75cbed66d96856-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:30 [async_llm.py:261] Added request cmpl-199bbf5a524a425a8f75cbed66d96856-0.
INFO 03-02 01:12:31 [logger.py:42] Received request cmpl-bf77331d75364fb4b70ac71934c6477b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:31 [async_llm.py:261] Added request cmpl-bf77331d75364fb4b70ac71934c6477b-0.
INFO 03-02 01:12:32 [logger.py:42] Received request cmpl-a13f6db1ed5d4cc4aa89f6261fe6ef2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:32 [async_llm.py:261] Added request cmpl-a13f6db1ed5d4cc4aa89f6261fe6ef2f-0.
INFO 03-02 01:12:33 [logger.py:42] Received request cmpl-2c7622cf5a2641c6a1450a98b3273766-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:33 [async_llm.py:261] Added request cmpl-2c7622cf5a2641c6a1450a98b3273766-0.
INFO 03-02 01:12:34 [logger.py:42] Received request cmpl-ffffca36dc85427ab9348f499964d80b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:34 [async_llm.py:261] Added request cmpl-ffffca36dc85427ab9348f499964d80b-0.
INFO 03-02 01:12:35 [logger.py:42] Received request cmpl-5987a04ce4b04be5bc3fce2fae8730b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:35 [async_llm.py:261] Added request cmpl-5987a04ce4b04be5bc3fce2fae8730b2-0.
INFO 03-02 01:12:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:36 [logger.py:42] Received request cmpl-b91795a73c8e481799d49f597de2d0b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:36 [async_llm.py:261] Added request cmpl-b91795a73c8e481799d49f597de2d0b5-0.
INFO 03-02 01:12:37 [logger.py:42] Received request cmpl-7170b219b14c4455be3a548bb9f99df7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:37 [async_llm.py:261] Added request cmpl-7170b219b14c4455be3a548bb9f99df7-0.
INFO 03-02 01:12:38 [logger.py:42] Received request cmpl-08ea073ed9864fd6b0b5f83dbd24034d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:38 [async_llm.py:261] Added request cmpl-08ea073ed9864fd6b0b5f83dbd24034d-0.
INFO 03-02 01:12:39 [logger.py:42] Received request cmpl-0726138fc7bd42ddb1be6cfc1e569533-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:39 [async_llm.py:261] Added request cmpl-0726138fc7bd42ddb1be6cfc1e569533-0.
INFO 03-02 01:12:41 [logger.py:42] Received request cmpl-ac2fc0c136f7403b9feb384caa928b2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:41 [async_llm.py:261] Added request cmpl-ac2fc0c136f7403b9feb384caa928b2a-0.
INFO 03-02 01:12:42 [logger.py:42] Received request cmpl-ad644f06a9054c0fb6390ac4f88bb02c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:42 [async_llm.py:261] Added request cmpl-ad644f06a9054c0fb6390ac4f88bb02c-0.
INFO 03-02 01:12:43 [logger.py:42] Received request cmpl-e0c0023209e24c19b3fe0ac84cea6fee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:43 [async_llm.py:261] Added request cmpl-e0c0023209e24c19b3fe0ac84cea6fee-0.
INFO 03-02 01:12:44 [logger.py:42] Received request cmpl-a7ba0b95b1b240b2a7a7504fcfe03342-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:44 [async_llm.py:261] Added request cmpl-a7ba0b95b1b240b2a7a7504fcfe03342-0.
INFO 03-02 01:12:45 [logger.py:42] Received request cmpl-d30f17766b0c4ec0b9e4eb4d8cbfa0d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:45 [async_llm.py:261] Added request cmpl-d30f17766b0c4ec0b9e4eb4d8cbfa0d9-0.
INFO 03-02 01:12:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:46 [logger.py:42] Received request cmpl-549e9aae53ec4d50ab9e1782582b1516-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:46 [async_llm.py:261] Added request cmpl-549e9aae53ec4d50ab9e1782582b1516-0.
INFO 03-02 01:12:47 [logger.py:42] Received request cmpl-97a5f73c1bde42a79efcc6e00662e16e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:47 [async_llm.py:261] Added request cmpl-97a5f73c1bde42a79efcc6e00662e16e-0.
INFO 03-02 01:12:48 [logger.py:42] Received request cmpl-9ad78942bb3f4ade9cc7d64399af7130-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:48 [async_llm.py:261] Added request cmpl-9ad78942bb3f4ade9cc7d64399af7130-0.
INFO 03-02 01:12:49 [logger.py:42] Received request cmpl-1b6c591d0f9c4835b64ebdee9bc6043d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:49 [async_llm.py:261] Added request cmpl-1b6c591d0f9c4835b64ebdee9bc6043d-0.
INFO 03-02 01:12:50 [logger.py:42] Received request cmpl-cf58c1e26870431a987d763a072a763c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:50 [async_llm.py:261] Added request cmpl-cf58c1e26870431a987d763a072a763c-0.
INFO 03-02 01:12:51 [logger.py:42] Received request cmpl-961bcf55e53b4ba18108eeb042678d93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:51 [async_llm.py:261] Added request cmpl-961bcf55e53b4ba18108eeb042678d93-0.
INFO 03-02 01:12:53 [logger.py:42] Received request cmpl-f9c73c43f94449509a492a83c1b03c34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:53 [async_llm.py:261] Added request cmpl-f9c73c43f94449509a492a83c1b03c34-0.
INFO 03-02 01:12:54 [logger.py:42] Received request cmpl-ec563997d4b54bf9b31e66da21505935-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:54 [async_llm.py:261] Added request cmpl-ec563997d4b54bf9b31e66da21505935-0.
INFO 03-02 01:12:55 [logger.py:42] Received request cmpl-916c068573eb46cfa8843bd24f8f1025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:55 [async_llm.py:261] Added request cmpl-916c068573eb46cfa8843bd24f8f1025-0.
INFO 03-02 01:12:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:12:56 [logger.py:42] Received request cmpl-e5297899fb0b4b0f86da035ea74620a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:56 [async_llm.py:261] Added request cmpl-e5297899fb0b4b0f86da035ea74620a8-0.
INFO 03-02 01:12:57 [logger.py:42] Received request cmpl-cbcfd61fdc5447b38d46b948df989651-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:57 [async_llm.py:261] Added request cmpl-cbcfd61fdc5447b38d46b948df989651-0.
INFO 03-02 01:12:58 [logger.py:42] Received request cmpl-e0a670780b174a6eaca4bc9e74b5ed1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:58 [async_llm.py:261] Added request cmpl-e0a670780b174a6eaca4bc9e74b5ed1a-0.
INFO 03-02 01:12:59 [logger.py:42] Received request cmpl-aa8c7568c6aa47e1a6f600f8929cff29-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:12:59 [async_llm.py:261] Added request cmpl-aa8c7568c6aa47e1a6f600f8929cff29-0.
INFO 03-02 01:13:00 [logger.py:42] Received request cmpl-3c6b078fee144f44a149facd5f63f9cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:00 [async_llm.py:261] Added request cmpl-3c6b078fee144f44a149facd5f63f9cb-0.
INFO 03-02 01:13:01 [logger.py:42] Received request cmpl-93b0617943604e6f8122e2ff553d15c0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:01 [async_llm.py:261] Added request cmpl-93b0617943604e6f8122e2ff553d15c0-0.
INFO 03-02 01:13:02 [logger.py:42] Received request cmpl-b3d9f62a171f42c09ab6195734474b09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:02 [async_llm.py:261] Added request cmpl-b3d9f62a171f42c09ab6195734474b09-0.
INFO 03-02 01:13:04 [logger.py:42] Received request cmpl-5d3f40047cfb4db5b9f9c0cfcad92a07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:04 [async_llm.py:261] Added request cmpl-5d3f40047cfb4db5b9f9c0cfcad92a07-0.
INFO 03-02 01:13:05 [logger.py:42] Received request cmpl-9f9729f4e03040abb0e1aa213777f4b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:05 [async_llm.py:261] Added request cmpl-9f9729f4e03040abb0e1aa213777f4b0-0.
INFO 03-02 01:13:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:06 [logger.py:42] Received request cmpl-107d744523654f82846d37ffff3827c5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:06 [async_llm.py:261] Added request cmpl-107d744523654f82846d37ffff3827c5-0.
INFO 03-02 01:13:07 [logger.py:42] Received request cmpl-e393ea5d65294eac8be35a0d64984878-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:07 [async_llm.py:261] Added request cmpl-e393ea5d65294eac8be35a0d64984878-0.
INFO 03-02 01:13:08 [logger.py:42] Received request cmpl-f17794cc05114c4aa55abdaf2b44b913-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:08 [async_llm.py:261] Added request cmpl-f17794cc05114c4aa55abdaf2b44b913-0.
INFO 03-02 01:13:09 [logger.py:42] Received request cmpl-af7b669e5b764413a246533d42b0f220-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:09 [async_llm.py:261] Added request cmpl-af7b669e5b764413a246533d42b0f220-0.
INFO 03-02 01:13:10 [logger.py:42] Received request cmpl-5689b34f2f1f4d44b29f7d223bb1015c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:10 [async_llm.py:261] Added request cmpl-5689b34f2f1f4d44b29f7d223bb1015c-0.
INFO 03-02 01:13:11 [logger.py:42] Received request cmpl-55fa2e6f3c1f4c8b820b7d0e433eac59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:11 [async_llm.py:261] Added request cmpl-55fa2e6f3c1f4c8b820b7d0e433eac59-0.
INFO 03-02 01:13:12 [logger.py:42] Received request cmpl-d1a3c78ded3d4e60bbae6445641589f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:12 [async_llm.py:261] Added request cmpl-d1a3c78ded3d4e60bbae6445641589f4-0.
INFO 03-02 01:13:13 [logger.py:42] Received request cmpl-f63f072b7313460fab316520a968fd39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:13 [async_llm.py:261] Added request cmpl-f63f072b7313460fab316520a968fd39-0.
INFO 03-02 01:13:15 [logger.py:42] Received request cmpl-4d76265c40cf46e1ac14dec3574c7f7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:15 [async_llm.py:261] Added request cmpl-4d76265c40cf46e1ac14dec3574c7f7f-0.
INFO 03-02 01:13:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:16 [logger.py:42] Received request cmpl-973e0d06dee6486388d58fc858dae470-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:16 [async_llm.py:261] Added request cmpl-973e0d06dee6486388d58fc858dae470-0.
INFO 03-02 01:13:17 [logger.py:42] Received request cmpl-b6fe2f0591a5444cb0b3fa260c0d41e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:17 [async_llm.py:261] Added request cmpl-b6fe2f0591a5444cb0b3fa260c0d41e6-0.
INFO 03-02 01:13:18 [logger.py:42] Received request cmpl-68330229901c40cc8d6dcbac2eee977c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:18 [async_llm.py:261] Added request cmpl-68330229901c40cc8d6dcbac2eee977c-0.
INFO 03-02 01:13:19 [logger.py:42] Received request cmpl-981762781eb74abe9ad4025ae4a43fc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:19 [async_llm.py:261] Added request cmpl-981762781eb74abe9ad4025ae4a43fc1-0.
INFO 03-02 01:13:20 [logger.py:42] Received request cmpl-2942171641f84daab25d48275de9a216-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:20 [async_llm.py:261] Added request cmpl-2942171641f84daab25d48275de9a216-0.
INFO 03-02 01:13:21 [logger.py:42] Received request cmpl-bc5f5be3e4f046498650794ce90ebebf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:21 [async_llm.py:261] Added request cmpl-bc5f5be3e4f046498650794ce90ebebf-0.
INFO 03-02 01:13:22 [logger.py:42] Received request cmpl-8ddcc7a8bc4c425bab3c8d1193824867-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:22 [async_llm.py:261] Added request cmpl-8ddcc7a8bc4c425bab3c8d1193824867-0.
INFO 03-02 01:13:23 [logger.py:42] Received request cmpl-62ad580ab490419187500b516049d197-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:23 [async_llm.py:261] Added request cmpl-62ad580ab490419187500b516049d197-0.
INFO 03-02 01:13:24 [logger.py:42] Received request cmpl-c1e5583bbf844991a031346deb9828fd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:24 [async_llm.py:261] Added request cmpl-c1e5583bbf844991a031346deb9828fd-0.
INFO 03-02 01:13:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:25 [logger.py:42] Received request cmpl-fd0924006a0648e790081694b6a852f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:25 [async_llm.py:261] Added request cmpl-fd0924006a0648e790081694b6a852f2-0.
INFO 03-02 01:13:27 [logger.py:42] Received request cmpl-9fd1bf1891714af8a64aa82b7ea5d96c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:27 [async_llm.py:261] Added request cmpl-9fd1bf1891714af8a64aa82b7ea5d96c-0.
INFO 03-02 01:13:28 [logger.py:42] Received request cmpl-1d73e0eb224940e48cd50f3180eae593-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:28 [async_llm.py:261] Added request cmpl-1d73e0eb224940e48cd50f3180eae593-0.
INFO 03-02 01:13:29 [logger.py:42] Received request cmpl-d2bf92a5d6024640ba613f5b3e2d20f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:29 [async_llm.py:261] Added request cmpl-d2bf92a5d6024640ba613f5b3e2d20f3-0.
INFO 03-02 01:13:30 [logger.py:42] Received request cmpl-539fd15d618345d98658c74246948d53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:30 [async_llm.py:261] Added request cmpl-539fd15d618345d98658c74246948d53-0.
INFO 03-02 01:13:31 [logger.py:42] Received request cmpl-cb8985658df6439d91a7c86aada578e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:31 [async_llm.py:261] Added request cmpl-cb8985658df6439d91a7c86aada578e8-0.
INFO 03-02 01:13:32 [logger.py:42] Received request cmpl-7c530531d3ae48499bef75c70189aa14-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:32 [async_llm.py:261] Added request cmpl-7c530531d3ae48499bef75c70189aa14-0.
INFO 03-02 01:13:33 [logger.py:42] Received request cmpl-0daeeead329f471283eebec17dacfcc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:33 [async_llm.py:261] Added request cmpl-0daeeead329f471283eebec17dacfcc1-0.
INFO 03-02 01:13:34 [logger.py:42] Received request cmpl-0fb804b6e4034b7097071349a1862ccf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:34 [async_llm.py:261] Added request cmpl-0fb804b6e4034b7097071349a1862ccf-0.
INFO 03-02 01:13:35 [logger.py:42] Received request cmpl-3c8ca11bf2a840dc97eba209ed41fcb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:35 [async_llm.py:261] Added request cmpl-3c8ca11bf2a840dc97eba209ed41fcb2-0.
INFO 03-02 01:13:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:36 [logger.py:42] Received request cmpl-492029d394fd41dc881789cf19a66cae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:36 [async_llm.py:261] Added request cmpl-492029d394fd41dc881789cf19a66cae-0.
INFO 03-02 01:13:38 [logger.py:42] Received request cmpl-dbb4a89b9b7a401989ca0fb0977f0a8d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:38 [async_llm.py:261] Added request cmpl-dbb4a89b9b7a401989ca0fb0977f0a8d-0.
INFO 03-02 01:13:39 [logger.py:42] Received request cmpl-8efc8472aa84451e87d1f0b214515da1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:39 [async_llm.py:261] Added request cmpl-8efc8472aa84451e87d1f0b214515da1-0.
INFO 03-02 01:13:40 [logger.py:42] Received request cmpl-974406c5722540edabd19474448c6d3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:40 [async_llm.py:261] Added request cmpl-974406c5722540edabd19474448c6d3f-0.
INFO 03-02 01:13:41 [logger.py:42] Received request cmpl-d9158da6f1d94f5093aaca26751d56bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:41 [async_llm.py:261] Added request cmpl-d9158da6f1d94f5093aaca26751d56bd-0.
INFO 03-02 01:13:42 [logger.py:42] Received request cmpl-21df176212eb4741a851f418b3bcda07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:42 [async_llm.py:261] Added request cmpl-21df176212eb4741a851f418b3bcda07-0.
INFO 03-02 01:13:43 [logger.py:42] Received request cmpl-9cecd891b3e84ac5b82ff771da7c5473-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:43 [async_llm.py:261] Added request cmpl-9cecd891b3e84ac5b82ff771da7c5473-0.
INFO 03-02 01:13:44 [logger.py:42] Received request cmpl-328dbfd6936045ac8023061dc833ac5e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:44 [async_llm.py:261] Added request cmpl-328dbfd6936045ac8023061dc833ac5e-0.
INFO 03-02 01:13:45 [logger.py:42] Received request cmpl-ad1e0d0eda2241e9aa7e9c3e70315cba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:45 [async_llm.py:261] Added request cmpl-ad1e0d0eda2241e9aa7e9c3e70315cba-0.
INFO 03-02 01:13:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:13:46 [logger.py:42] Received request cmpl-5473535739ed45d794d2d17d7791f3ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:46 [async_llm.py:261] Added request cmpl-5473535739ed45d794d2d17d7791f3ed-0.
INFO 03-02 01:13:47 [logger.py:42] Received request cmpl-5913562a1be0427292454c5d5f22d916-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:47 [async_llm.py:261] Added request cmpl-5913562a1be0427292454c5d5f22d916-0.
INFO 03-02 01:13:48 [logger.py:42] Received request cmpl-47867e804a5e49ddabafd5fff59b232c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:48 [async_llm.py:261] Added request cmpl-47867e804a5e49ddabafd5fff59b232c-0.
INFO 03-02 01:13:50 [logger.py:42] Received request cmpl-34a21fc3ec4f4700b05885da2eabd4d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:13:50 [async_llm.py:261] Added request cmpl-34a21fc3ec4f4700b05885da2eabd4d6-0.
[... the same three-line cycle (Received request / 200 OK / Added request) repeats roughly once per second from 01:13:51 through 01:14:33, each cycle differing only in its unique request ID; the engine stats logged every ten seconds are unchanged throughout:]
INFO 03-02 01:13:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:34 [logger.py:42] Received request cmpl-98dce99fdfc2448f85060cd76b540637-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:34 [async_llm.py:261] Added request cmpl-98dce99fdfc2448f85060cd76b540637-0.
INFO 03-02 01:14:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:36 [logger.py:42] Received request cmpl-89a2bb375c8847b988652710d0bab555-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:36 [async_llm.py:261] Added request cmpl-89a2bb375c8847b988652710d0bab555-0.
INFO 03-02 01:14:37 [logger.py:42] Received request cmpl-af658329a4a846c3bfb9f4bcb3df8cc3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:37 [async_llm.py:261] Added request cmpl-af658329a4a846c3bfb9f4bcb3df8cc3-0.
INFO 03-02 01:14:38 [logger.py:42] Received request cmpl-8fda6e77cc8c437598cc66ec0916fba1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:38 [async_llm.py:261] Added request cmpl-8fda6e77cc8c437598cc66ec0916fba1-0.
INFO 03-02 01:14:39 [logger.py:42] Received request cmpl-84f710bf5ced482a87342af1d429d963-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:39 [async_llm.py:261] Added request cmpl-84f710bf5ced482a87342af1d429d963-0.
INFO 03-02 01:14:40 [logger.py:42] Received request cmpl-625eb130e92241d09f61dad4e84e1e0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:40 [async_llm.py:261] Added request cmpl-625eb130e92241d09f61dad4e84e1e0a-0.
INFO 03-02 01:14:41 [logger.py:42] Received request cmpl-701f62bb0f4241039ed54cfa602a5ac3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:41 [async_llm.py:261] Added request cmpl-701f62bb0f4241039ed54cfa602a5ac3-0.
INFO 03-02 01:14:42 [logger.py:42] Received request cmpl-051c59b5f9914d5983a9353f15f435a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:42 [async_llm.py:261] Added request cmpl-051c59b5f9914d5983a9353f15f435a1-0.
INFO 03-02 01:14:43 [logger.py:42] Received request cmpl-35837f015cf0494592b79ba1dad84b5f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:43 [async_llm.py:261] Added request cmpl-35837f015cf0494592b79ba1dad84b5f-0.
INFO 03-02 01:14:44 [logger.py:42] Received request cmpl-31a69d227c8b462abc546e96b4ef120c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:44 [async_llm.py:261] Added request cmpl-31a69d227c8b462abc546e96b4ef120c-0.
INFO 03-02 01:14:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:45 [logger.py:42] Received request cmpl-9ca81ef092db4e2fad1f8c0dacae0916-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:45 [async_llm.py:261] Added request cmpl-9ca81ef092db4e2fad1f8c0dacae0916-0.
INFO 03-02 01:14:47 [logger.py:42] Received request cmpl-ba26dd467d524483a8af1690e2a7238e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:47 [async_llm.py:261] Added request cmpl-ba26dd467d524483a8af1690e2a7238e-0.
INFO 03-02 01:14:48 [logger.py:42] Received request cmpl-32aa5cd123b9430ca03eecd365e97fbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:48 [async_llm.py:261] Added request cmpl-32aa5cd123b9430ca03eecd365e97fbc-0.
INFO 03-02 01:14:49 [logger.py:42] Received request cmpl-9b01e2cd53c741888e27cbe66071f6ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:49 [async_llm.py:261] Added request cmpl-9b01e2cd53c741888e27cbe66071f6ed-0.
INFO 03-02 01:14:50 [logger.py:42] Received request cmpl-8dd4d178bc6448f1a7449dca77a76002-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:50 [async_llm.py:261] Added request cmpl-8dd4d178bc6448f1a7449dca77a76002-0.
INFO 03-02 01:14:51 [logger.py:42] Received request cmpl-072d89d5a1814cde86c438510aee2788-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:51 [async_llm.py:261] Added request cmpl-072d89d5a1814cde86c438510aee2788-0.
INFO 03-02 01:14:52 [logger.py:42] Received request cmpl-26b59bdaec3c458596c769c997afef4a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:52 [async_llm.py:261] Added request cmpl-26b59bdaec3c458596c769c997afef4a-0.
INFO 03-02 01:14:53 [logger.py:42] Received request cmpl-3bf6cb787c42400989a6c1b0e394e100-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:53 [async_llm.py:261] Added request cmpl-3bf6cb787c42400989a6c1b0e394e100-0.
INFO 03-02 01:14:54 [logger.py:42] Received request cmpl-5c3cba59e428424c90c630f76f00122c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:54 [async_llm.py:261] Added request cmpl-5c3cba59e428424c90c630f76f00122c-0.
INFO 03-02 01:14:55 [logger.py:42] Received request cmpl-f8131da7671a486ab16eba9973bdb1c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:55 [async_llm.py:261] Added request cmpl-f8131da7671a486ab16eba9973bdb1c6-0.
INFO 03-02 01:14:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:14:56 [logger.py:42] Received request cmpl-15c14211747f46c194112efe3dfa3385-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:56 [async_llm.py:261] Added request cmpl-15c14211747f46c194112efe3dfa3385-0.
INFO 03-02 01:14:58 [logger.py:42] Received request cmpl-e4e0a7f243cd4fcba6d6373e5789ed9f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:58 [async_llm.py:261] Added request cmpl-e4e0a7f243cd4fcba6d6373e5789ed9f-0.
INFO 03-02 01:14:59 [logger.py:42] Received request cmpl-0184f1145bf847db84b6274ec6dab377-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:14:59 [async_llm.py:261] Added request cmpl-0184f1145bf847db84b6274ec6dab377-0.
INFO 03-02 01:15:00 [logger.py:42] Received request cmpl-0c09b751fb1c4a21bd7310bff5d58155-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:00 [async_llm.py:261] Added request cmpl-0c09b751fb1c4a21bd7310bff5d58155-0.
INFO 03-02 01:15:01 [logger.py:42] Received request cmpl-b21a60cf82cf43be95a09eafb043b7d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:01 [async_llm.py:261] Added request cmpl-b21a60cf82cf43be95a09eafb043b7d9-0.
INFO 03-02 01:15:02 [logger.py:42] Received request cmpl-44cabf11fb1a4478b8a16b51bed2c990-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:02 [async_llm.py:261] Added request cmpl-44cabf11fb1a4478b8a16b51bed2c990-0.
INFO 03-02 01:15:03 [logger.py:42] Received request cmpl-0d35e40e01e44bfc9ee8b0f6b77ee33c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:03 [async_llm.py:261] Added request cmpl-0d35e40e01e44bfc9ee8b0f6b77ee33c-0.
INFO 03-02 01:15:04 [logger.py:42] Received request cmpl-e41a797550c84a00878502fc6ce78cae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:04 [async_llm.py:261] Added request cmpl-e41a797550c84a00878502fc6ce78cae-0.
INFO 03-02 01:15:05 [logger.py:42] Received request cmpl-14539909851748008ce4db983f1c97b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:05 [async_llm.py:261] Added request cmpl-14539909851748008ce4db983f1c97b7-0.
INFO 03-02 01:15:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:06 [logger.py:42] Received request cmpl-b7f96c17338d49d7ba88fe4034bad8af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:06 [async_llm.py:261] Added request cmpl-b7f96c17338d49d7ba88fe4034bad8af-0.
INFO 03-02 01:15:07 [logger.py:42] Received request cmpl-df9225c28c6542ca89d27fa6646a6154-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:07 [async_llm.py:261] Added request cmpl-df9225c28c6542ca89d27fa6646a6154-0.
INFO 03-02 01:15:08 [logger.py:42] Received request cmpl-fa7ec3dd06304274aa8102560442d4fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:08 [async_llm.py:261] Added request cmpl-fa7ec3dd06304274aa8102560442d4fa-0.
INFO 03-02 01:15:10 [logger.py:42] Received request cmpl-6b7a65e717c640a287492677ef97ca52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:10 [async_llm.py:261] Added request cmpl-6b7a65e717c640a287492677ef97ca52-0.
INFO 03-02 01:15:11 [logger.py:42] Received request cmpl-4e39d2812ccc42ccb30aa82abdda9d10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:11 [async_llm.py:261] Added request cmpl-4e39d2812ccc42ccb30aa82abdda9d10-0.
INFO 03-02 01:15:12 [logger.py:42] Received request cmpl-d2478753b303450dab80963ff570399e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:12 [async_llm.py:261] Added request cmpl-d2478753b303450dab80963ff570399e-0.
INFO 03-02 01:15:13 [logger.py:42] Received request cmpl-efee850eaf9a417a88a2a04f7b8e4345-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:13 [async_llm.py:261] Added request cmpl-efee850eaf9a417a88a2a04f7b8e4345-0.
INFO 03-02 01:15:14 [logger.py:42] Received request cmpl-73be77d1f321440dbf41426a9ffdba21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:14 [async_llm.py:261] Added request cmpl-73be77d1f321440dbf41426a9ffdba21-0.
INFO 03-02 01:15:15 [logger.py:42] Received request cmpl-6110e999f29e406e9b9b0db4f7f2ea37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:15 [async_llm.py:261] Added request cmpl-6110e999f29e406e9b9b0db4f7f2ea37-0.
INFO 03-02 01:15:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:16 [logger.py:42] Received request cmpl-5ca94d148d3d43e5b1d25da03f41b271-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:16 [async_llm.py:261] Added request cmpl-5ca94d148d3d43e5b1d25da03f41b271-0.
INFO 03-02 01:15:17 [logger.py:42] Received request cmpl-9a62a370f6eb47758ec6c3b147a6973b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:17 [async_llm.py:261] Added request cmpl-9a62a370f6eb47758ec6c3b147a6973b-0.
INFO 03-02 01:15:18 [logger.py:42] Received request cmpl-93081e99f5fa404db1010e16378f3d89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:18 [async_llm.py:261] Added request cmpl-93081e99f5fa404db1010e16378f3d89-0.
INFO 03-02 01:15:19 [logger.py:42] Received request cmpl-51dd6d3b1dc4471e85b404934e29549d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:19 [async_llm.py:261] Added request cmpl-51dd6d3b1dc4471e85b404934e29549d-0.
INFO 03-02 01:15:21 [logger.py:42] Received request cmpl-37770c8baf3449f1bff48e3dd5483ddc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:21 [async_llm.py:261] Added request cmpl-37770c8baf3449f1bff48e3dd5483ddc-0.
INFO 03-02 01:15:22 [logger.py:42] Received request cmpl-17bfbcd52f3d488fb2678642fcaffe4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:22 [async_llm.py:261] Added request cmpl-17bfbcd52f3d488fb2678642fcaffe4b-0.
INFO 03-02 01:15:23 [logger.py:42] Received request cmpl-60cf8d1acfc544a6a8a5e9641fbc8fda-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:23 [async_llm.py:261] Added request cmpl-60cf8d1acfc544a6a8a5e9641fbc8fda-0.
INFO 03-02 01:15:24 [logger.py:42] Received request cmpl-eb1ba75867234cf68e7deab7160760e3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:24 [async_llm.py:261] Added request cmpl-eb1ba75867234cf68e7deab7160760e3-0.
INFO 03-02 01:15:25 [logger.py:42] Received request cmpl-2e6dbe4c54784464aad80ceaf5a3a28d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:25 [async_llm.py:261] Added request cmpl-2e6dbe4c54784464aad80ceaf5a3a28d-0.
INFO 03-02 01:15:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:26 [logger.py:42] Received request cmpl-264bba5bdb8744fcbd677bbe51271e38-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:26 [async_llm.py:261] Added request cmpl-264bba5bdb8744fcbd677bbe51271e38-0.
INFO 03-02 01:15:27 [logger.py:42] Received request cmpl-b92ac90a63b14a1ab155c47b53485723-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:27 [async_llm.py:261] Added request cmpl-b92ac90a63b14a1ab155c47b53485723-0.
INFO 03-02 01:15:28 [logger.py:42] Received request cmpl-b27ed51fe63c4e789fcd54f9754c5620-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:28 [async_llm.py:261] Added request cmpl-b27ed51fe63c4e789fcd54f9754c5620-0.
INFO 03-02 01:15:29 [logger.py:42] Received request cmpl-6ceb7daf67c34e4e9bda543ac5966c1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:29 [async_llm.py:261] Added request cmpl-6ceb7daf67c34e4e9bda543ac5966c1d-0.
INFO 03-02 01:15:30 [logger.py:42] Received request cmpl-ff6ee37099044b319f25aca93b81ef4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:30 [async_llm.py:261] Added request cmpl-ff6ee37099044b319f25aca93b81ef4f-0.
INFO 03-02 01:15:31 [logger.py:42] Received request cmpl-04d00755ca8049d1b3afa33f526358d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:31 [async_llm.py:261] Added request cmpl-04d00755ca8049d1b3afa33f526358d7-0.
INFO 03-02 01:15:33 [logger.py:42] Received request cmpl-2f8f359d7ebb43f0a9f574bb416e66a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:33 [async_llm.py:261] Added request cmpl-2f8f359d7ebb43f0a9f574bb416e66a9-0.
INFO 03-02 01:15:34 [logger.py:42] Received request cmpl-39a382b19d654effacd9043403a6e749-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:34 [async_llm.py:261] Added request cmpl-39a382b19d654effacd9043403a6e749-0.
INFO 03-02 01:15:35 [logger.py:42] Received request cmpl-c79cc58e6c6a436ea9e21bd16ae6bcf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:35 [async_llm.py:261] Added request cmpl-c79cc58e6c6a436ea9e21bd16ae6bcf7-0.
INFO 03-02 01:15:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:36 [logger.py:42] Received request cmpl-df6cc124a756433cb03c21ee0bfb7904-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:36 [async_llm.py:261] Added request cmpl-df6cc124a756433cb03c21ee0bfb7904-0.
INFO 03-02 01:15:37 [logger.py:42] Received request cmpl-cce498cfc0cb44abbf7b3f3c5c918098-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:37 [async_llm.py:261] Added request cmpl-cce498cfc0cb44abbf7b3f3c5c918098-0.
INFO 03-02 01:15:38 [logger.py:42] Received request cmpl-cffadb06319f4dfaa9d3d8c8bb792fbf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:38 [async_llm.py:261] Added request cmpl-cffadb06319f4dfaa9d3d8c8bb792fbf-0.
INFO 03-02 01:15:39 [logger.py:42] Received request cmpl-4a6534f40c364b71a35b436394a975fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:39 [async_llm.py:261] Added request cmpl-4a6534f40c364b71a35b436394a975fe-0.
INFO 03-02 01:15:40 [logger.py:42] Received request cmpl-68336582b25b411cb783c5a454cec0ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:40 [async_llm.py:261] Added request cmpl-68336582b25b411cb783c5a454cec0ae-0.
INFO 03-02 01:15:41 [logger.py:42] Received request cmpl-26a56862be0e4bafb7f72329c6cad71c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:41 [async_llm.py:261] Added request cmpl-26a56862be0e4bafb7f72329c6cad71c-0.
INFO 03-02 01:15:42 [logger.py:42] Received request cmpl-f6a4589c31b14d418dd0bf3aa6bca8d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:42 [async_llm.py:261] Added request cmpl-f6a4589c31b14d418dd0bf3aa6bca8d2-0.
INFO 03-02 01:15:44 [logger.py:42] Received request cmpl-2a2062f33cc14e11b06133b84ee71ad6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:44 [async_llm.py:261] Added request cmpl-2a2062f33cc14e11b06133b84ee71ad6-0.
INFO 03-02 01:15:45 [logger.py:42] Received request cmpl-eb11852fe0c2432689220f310bf0ce21-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:45 [async_llm.py:261] Added request cmpl-eb11852fe0c2432689220f310bf0ce21-0.
INFO 03-02 01:15:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:15:46 [logger.py:42] Received request cmpl-005474b6050a40c38958bfecbf12bad1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:46 [async_llm.py:261] Added request cmpl-005474b6050a40c38958bfecbf12bad1-0.
INFO 03-02 01:15:47 [logger.py:42] Received request cmpl-4fd4eb17d30e4910a0f30939da5b156a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:47 [async_llm.py:261] Added request cmpl-4fd4eb17d30e4910a0f30939da5b156a-0.
INFO 03-02 01:15:48 [logger.py:42] Received request cmpl-dca9e9336b2e498299fe74ebb29630fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:15:48 [async_llm.py:261] Added request cmpl-dca9e9336b2e498299fe74ebb29630fc-0.
[... repeated request cycles elided: the same 'write a quick sort algorithm.' prompt with max_tokens=5, roughly one per second, each answered 200 OK ...]
INFO 03-02 01:15:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... repeated request cycles elided ...]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:33 [async_llm.py:261] Added request cmpl-4e85eee3abac4d0298707590084839b7-0.
INFO 03-02 01:16:34 [logger.py:42] Received request cmpl-350133d6b74340f391d9122f6cc661d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:34 [async_llm.py:261] Added request cmpl-350133d6b74340f391d9122f6cc661d2-0.
INFO 03-02 01:16:35 [logger.py:42] Received request cmpl-5248a1c381ef44059ba3d55e0d4440b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:35 [async_llm.py:261] Added request cmpl-5248a1c381ef44059ba3d55e0d4440b5-0.
INFO 03-02 01:16:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:36 [logger.py:42] Received request cmpl-d72e6040cee94fe3a53617eaf6164203-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:36 [async_llm.py:261] Added request cmpl-d72e6040cee94fe3a53617eaf6164203-0.
INFO 03-02 01:16:37 [logger.py:42] Received request cmpl-5def50bbbe7d4dd09762b8fe86e354f9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:37 [async_llm.py:261] Added request cmpl-5def50bbbe7d4dd09762b8fe86e354f9-0.
INFO 03-02 01:16:38 [logger.py:42] Received request cmpl-6a4542ea8da94bbbb4ca1c565a1e18b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:38 [async_llm.py:261] Added request cmpl-6a4542ea8da94bbbb4ca1c565a1e18b9-0.
INFO 03-02 01:16:39 [logger.py:42] Received request cmpl-73f71f38a9fd41adaff50e1ff842add9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:39 [async_llm.py:261] Added request cmpl-73f71f38a9fd41adaff50e1ff842add9-0.
INFO 03-02 01:16:41 [logger.py:42] Received request cmpl-b73e61ad02a24adcbe98a77428ff9977-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:41 [async_llm.py:261] Added request cmpl-b73e61ad02a24adcbe98a77428ff9977-0.
INFO 03-02 01:16:42 [logger.py:42] Received request cmpl-2051c89484084638b68e9f183a4e53a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:42 [async_llm.py:261] Added request cmpl-2051c89484084638b68e9f183a4e53a3-0.
INFO 03-02 01:16:43 [logger.py:42] Received request cmpl-4c2fcedd9f224f19931740327753946e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:43 [async_llm.py:261] Added request cmpl-4c2fcedd9f224f19931740327753946e-0.
INFO 03-02 01:16:44 [logger.py:42] Received request cmpl-93231d7f58304a5e9425e407a192f9cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:44 [async_llm.py:261] Added request cmpl-93231d7f58304a5e9425e407a192f9cc-0.
INFO 03-02 01:16:45 [logger.py:42] Received request cmpl-56cd0f16817943eb9a6f0d16613f523d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:45 [async_llm.py:261] Added request cmpl-56cd0f16817943eb9a6f0d16613f523d-0.
INFO 03-02 01:16:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:46 [logger.py:42] Received request cmpl-2fc15724b2244b0f9d6c50cb480a4e8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:46 [async_llm.py:261] Added request cmpl-2fc15724b2244b0f9d6c50cb480a4e8f-0.
INFO 03-02 01:16:47 [logger.py:42] Received request cmpl-0887e16a9c424d599f6142b936a346b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:47 [async_llm.py:261] Added request cmpl-0887e16a9c424d599f6142b936a346b6-0.
INFO 03-02 01:16:48 [logger.py:42] Received request cmpl-66c999daf8c84452bdbedc6832caaeec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:48 [async_llm.py:261] Added request cmpl-66c999daf8c84452bdbedc6832caaeec-0.
INFO 03-02 01:16:49 [logger.py:42] Received request cmpl-0c72dae73b2f459e9be5ae23277f2a5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:49 [async_llm.py:261] Added request cmpl-0c72dae73b2f459e9be5ae23277f2a5a-0.
INFO 03-02 01:16:50 [logger.py:42] Received request cmpl-5bc923b4519c469a8ee876474e46f2fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:50 [async_llm.py:261] Added request cmpl-5bc923b4519c469a8ee876474e46f2fa-0.
INFO 03-02 01:16:52 [logger.py:42] Received request cmpl-1cc643e1633d4346a4e55cf03d8e1802-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:52 [async_llm.py:261] Added request cmpl-1cc643e1633d4346a4e55cf03d8e1802-0.
INFO 03-02 01:16:53 [logger.py:42] Received request cmpl-c3c0277e0a464e4399aa8dfcf415c20e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:53 [async_llm.py:261] Added request cmpl-c3c0277e0a464e4399aa8dfcf415c20e-0.
INFO 03-02 01:16:54 [logger.py:42] Received request cmpl-fe9bdcc62a254314ad164e83e53a2520-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:54 [async_llm.py:261] Added request cmpl-fe9bdcc62a254314ad164e83e53a2520-0.
INFO 03-02 01:16:55 [logger.py:42] Received request cmpl-5ba0299a92a94982b58bfde3dcbd344d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:55 [async_llm.py:261] Added request cmpl-5ba0299a92a94982b58bfde3dcbd344d-0.
INFO 03-02 01:16:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:16:56 [logger.py:42] Received request cmpl-76629cd1b0fe4211bf38d898e84ab719-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:56 [async_llm.py:261] Added request cmpl-76629cd1b0fe4211bf38d898e84ab719-0.
INFO 03-02 01:16:57 [logger.py:42] Received request cmpl-26d57d96a44d4e878b212618d645ada4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:57 [async_llm.py:261] Added request cmpl-26d57d96a44d4e878b212618d645ada4-0.
INFO 03-02 01:16:58 [logger.py:42] Received request cmpl-cb6451c3fc5749499f7dad8f3f43b530-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:58 [async_llm.py:261] Added request cmpl-cb6451c3fc5749499f7dad8f3f43b530-0.
INFO 03-02 01:16:59 [logger.py:42] Received request cmpl-a19bbb1072144e2d91fbf67f8a82e957-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:16:59 [async_llm.py:261] Added request cmpl-a19bbb1072144e2d91fbf67f8a82e957-0.
INFO 03-02 01:17:00 [logger.py:42] Received request cmpl-203f945d8e8b441a9ea8b7efa6beac8f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:00 [async_llm.py:261] Added request cmpl-203f945d8e8b441a9ea8b7efa6beac8f-0.
INFO 03-02 01:17:01 [logger.py:42] Received request cmpl-38e0470caa544defb4abad216ea65ac4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:01 [async_llm.py:261] Added request cmpl-38e0470caa544defb4abad216ea65ac4-0.
INFO 03-02 01:17:02 [logger.py:42] Received request cmpl-f99c87c694c34ad6a17cda3f9fb44916-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:02 [async_llm.py:261] Added request cmpl-f99c87c694c34ad6a17cda3f9fb44916-0.
INFO 03-02 01:17:04 [logger.py:42] Received request cmpl-da09b8be1caf44468eee9baaae5ada47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:04 [async_llm.py:261] Added request cmpl-da09b8be1caf44468eee9baaae5ada47-0.
INFO 03-02 01:17:05 [logger.py:42] Received request cmpl-55ef7b8066974c18823746bf22b790b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:05 [async_llm.py:261] Added request cmpl-55ef7b8066974c18823746bf22b790b4-0.
INFO 03-02 01:17:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:06 [logger.py:42] Received request cmpl-4ba9913c406848999b2c48869275911c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:06 [async_llm.py:261] Added request cmpl-4ba9913c406848999b2c48869275911c-0.
INFO 03-02 01:17:07 [logger.py:42] Received request cmpl-4328f062668141a19605ae3a5ac1da2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:07 [async_llm.py:261] Added request cmpl-4328f062668141a19605ae3a5ac1da2e-0.
INFO 03-02 01:17:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:46 [logger.py:42] Received request cmpl-487f3949f98f4852bc5015b3f42b34a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:46 [async_llm.py:261] Added request cmpl-487f3949f98f4852bc5015b3f42b34a2-0.
INFO 03-02 01:17:47 [logger.py:42] Received request cmpl-ea33ca16dec94c7484640d1fb8f1aca6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:47 [async_llm.py:261] Added request cmpl-ea33ca16dec94c7484640d1fb8f1aca6-0.
INFO 03-02 01:17:49 [logger.py:42] Received request cmpl-de3cc52eacd2473ea364e972cbf33597-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:49 [async_llm.py:261] Added request cmpl-de3cc52eacd2473ea364e972cbf33597-0.
INFO 03-02 01:17:50 [logger.py:42] Received request cmpl-8095855116c14237b61d519ba0e8911c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:50 [async_llm.py:261] Added request cmpl-8095855116c14237b61d519ba0e8911c-0.
INFO 03-02 01:17:51 [logger.py:42] Received request cmpl-9cd0b86c09574962bff45c4ed0e78dbd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:51 [async_llm.py:261] Added request cmpl-9cd0b86c09574962bff45c4ed0e78dbd-0.
INFO 03-02 01:17:52 [logger.py:42] Received request cmpl-af4dfdbd9d7a4529b3b7819b66675144-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:52 [async_llm.py:261] Added request cmpl-af4dfdbd9d7a4529b3b7819b66675144-0.
INFO 03-02 01:17:53 [logger.py:42] Received request cmpl-22d1e83a4e5a43a4908ea2f998542a52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:53 [async_llm.py:261] Added request cmpl-22d1e83a4e5a43a4908ea2f998542a52-0.
INFO 03-02 01:17:54 [logger.py:42] Received request cmpl-b02cbcd0943f45dc8e576fe5435ea4f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:54 [async_llm.py:261] Added request cmpl-b02cbcd0943f45dc8e576fe5435ea4f1-0.
INFO 03-02 01:17:55 [logger.py:42] Received request cmpl-e179f440d89547e1a60ea9738c10386b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:55 [async_llm.py:261] Added request cmpl-e179f440d89547e1a60ea9738c10386b-0.
INFO 03-02 01:17:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:17:56 [logger.py:42] Received request cmpl-9d8b31d07b514a88bccaf57dc2c972fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:56 [async_llm.py:261] Added request cmpl-9d8b31d07b514a88bccaf57dc2c972fb-0.
INFO 03-02 01:17:57 [logger.py:42] Received request cmpl-90e88b713d6946b68a085624f02dbd5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:57 [async_llm.py:261] Added request cmpl-90e88b713d6946b68a085624f02dbd5d-0.
INFO 03-02 01:17:58 [logger.py:42] Received request cmpl-d7d998a107cc4564b6d0a97b27e9a170-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:58 [async_llm.py:261] Added request cmpl-d7d998a107cc4564b6d0a97b27e9a170-0.
INFO 03-02 01:17:59 [logger.py:42] Received request cmpl-be1868d5508e458d9ddea52f4ff29bd2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:17:59 [async_llm.py:261] Added request cmpl-be1868d5508e458d9ddea52f4ff29bd2-0.
INFO 03-02 01:18:01 [logger.py:42] Received request cmpl-0be181d7986445ffad3825c86de13e98-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:01 [async_llm.py:261] Added request cmpl-0be181d7986445ffad3825c86de13e98-0.
INFO 03-02 01:18:02 [logger.py:42] Received request cmpl-f89d561a7d7743b59f788adc6e360b2b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:02 [async_llm.py:261] Added request cmpl-f89d561a7d7743b59f788adc6e360b2b-0.
INFO 03-02 01:18:03 [logger.py:42] Received request cmpl-406c0ef6c9da447cbb52a0537da363de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:03 [async_llm.py:261] Added request cmpl-406c0ef6c9da447cbb52a0537da363de-0.
INFO 03-02 01:18:04 [logger.py:42] Received request cmpl-c318e34521ad409e8bf1682a2aeb1cff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:04 [async_llm.py:261] Added request cmpl-c318e34521ad409e8bf1682a2aeb1cff-0.
INFO 03-02 01:18:05 [logger.py:42] Received request cmpl-e6d7f22575a44ce0be3028e435723740-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:05 [async_llm.py:261] Added request cmpl-e6d7f22575a44ce0be3028e435723740-0.
INFO 03-02 01:18:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:18:06 [logger.py:42] Received request cmpl-80d7888288bd4b4da420ab557c9d2412-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:06 [async_llm.py:261] Added request cmpl-80d7888288bd4b4da420ab557c9d2412-0.
INFO 03-02 01:18:07 [logger.py:42] Received request cmpl-0dd83526f8544e61b9a2282a0cb6f5ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:07 [async_llm.py:261] Added request cmpl-0dd83526f8544e61b9a2282a0cb6f5ec-0.
INFO 03-02 01:18:08 [logger.py:42] Received request cmpl-82fa98d3fb304e7798c3e8aa7113ed9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:08 [async_llm.py:261] Added request cmpl-82fa98d3fb304e7798c3e8aa7113ed9b-0.
INFO 03-02 01:18:09 [logger.py:42] Received request cmpl-bdeebacfa1d24b4ea9c89c04466c8650-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:09 [async_llm.py:261] Added request cmpl-bdeebacfa1d24b4ea9c89c04466c8650-0.
INFO 03-02 01:18:10 [logger.py:42] Received request cmpl-7c62042ebaf04f92bd9f8bfb7a07e655-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:10 [async_llm.py:261] Added request cmpl-7c62042ebaf04f92bd9f8bfb7a07e655-0.
INFO 03-02 01:18:12 [logger.py:42] Received request cmpl-163234ba142b43cbadceb4c07fde7179-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:12 [async_llm.py:261] Added request cmpl-163234ba142b43cbadceb4c07fde7179-0.
INFO 03-02 01:18:13 [logger.py:42] Received request cmpl-e645a5d71d15480896b3fbed8721d6c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:13 [async_llm.py:261] Added request cmpl-e645a5d71d15480896b3fbed8721d6c3-0.
INFO 03-02 01:18:14 [logger.py:42] Received request cmpl-1313ecccfcc04d0fbeb86068c8df96df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:14 [async_llm.py:261] Added request cmpl-1313ecccfcc04d0fbeb86068c8df96df-0.
INFO 03-02 01:18:15 [logger.py:42] Received request cmpl-ab8de137e8d14d04b456389ae1aa7d36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:15 [async_llm.py:261] Added request cmpl-ab8de137e8d14d04b456389ae1aa7d36-0.
INFO 03-02 01:18:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:18:16 [logger.py:42] Received request cmpl-623157e6fa0e4adb9924e520c535126a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:16 [async_llm.py:261] Added request cmpl-623157e6fa0e4adb9924e520c535126a-0.
INFO 03-02 01:18:17 [logger.py:42] Received request cmpl-b76205a5258b47c3bfc7223a2e9312ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:17 [async_llm.py:261] Added request cmpl-b76205a5258b47c3bfc7223a2e9312ce-0.
INFO 03-02 01:18:18 [logger.py:42] Received request cmpl-a44013326d53400c957b518f5f6a5bfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:18 [async_llm.py:261] Added request cmpl-a44013326d53400c957b518f5f6a5bfb-0.
INFO 03-02 01:18:19 [logger.py:42] Received request cmpl-be02664b85724081ae6023608a2f5b36-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:19 [async_llm.py:261] Added request cmpl-be02664b85724081ae6023608a2f5b36-0.
INFO 03-02 01:18:20 [logger.py:42] Received request cmpl-401669e9b4be4fd689629fd0b9634796-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:20 [async_llm.py:261] Added request cmpl-401669e9b4be4fd689629fd0b9634796-0.
INFO 03-02 01:18:21 [logger.py:42] Received request cmpl-8cf56c1dc880443989cc5317b20be40d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:21 [async_llm.py:261] Added request cmpl-8cf56c1dc880443989cc5317b20be40d-0.
INFO 03-02 01:18:22 [logger.py:42] Received request cmpl-40673bf8f2954b91ab5e06ad85df5701-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:23 [async_llm.py:261] Added request cmpl-40673bf8f2954b91ab5e06ad85df5701-0.
INFO 03-02 01:18:24 [logger.py:42] Received request cmpl-fdf08ec7c2704abda0b7ad1170d38199-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:24 [async_llm.py:261] Added request cmpl-fdf08ec7c2704abda0b7ad1170d38199-0.
INFO 03-02 01:18:25 [logger.py:42] Received request cmpl-eaeee2b418e240b08e1ea17763863e7b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:25 [async_llm.py:261] Added request cmpl-eaeee2b418e240b08e1ea17763863e7b-0.
INFO 03-02 01:18:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:18:26 [logger.py:42] Received request cmpl-ac50a0ddfdaa490facf0b6103ce2b97a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:18:26 [async_llm.py:261] Added request cmpl-ac50a0ddfdaa490facf0b6103ce2b97a-0.
INFO 03-02 01:18:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
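The engine's reported averages are consistent with the request pattern in the log: each request carries a 7-token prompt (`prompt_token_ids` has 7 entries), is capped at `max_tokens=5`, and arrives roughly once per second. A back-of-the-envelope check, assuming a 10-second metrics window with 9 completed requests (both read off the log timestamps, not stated by the engine):

```python
# Sanity-check the throughput line above. The 10 s window and 9 requests
# per window are assumptions inferred from the log timestamps.
prompt_tokens_per_req = 7   # len(prompt_token_ids) in each logged request
gen_tokens_per_req = 5      # max_tokens=5, assumed fully used at temperature=0.0
reqs_per_sec = 9 / 10       # ~9 requests completing per 10 s window

prompt_tps = prompt_tokens_per_req * reqs_per_sec
gen_tps = gen_tokens_per_req * reqs_per_sec
print(round(prompt_tps, 1), round(gen_tps, 1))  # 6.3 4.5, matching the log
```

`Running: 0 reqs` at each sample is also expected: with only 5 generated tokens per request, each request finishes well inside the one-second inter-arrival gap, so the stats sampler never catches one in flight.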
INFO 03-02 01:19:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:11 [async_llm.py:261] Added request cmpl-d1b0019101fe483cb6b1f2dd5af390f4-0.
INFO 03-02 01:19:12 [logger.py:42] Received request cmpl-622f1335c47c496da8f91e4e59e208b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:12 [async_llm.py:261] Added request cmpl-622f1335c47c496da8f91e4e59e208b0-0.
INFO 03-02 01:19:13 [logger.py:42] Received request cmpl-f64e757f3b7c499ea8fc4b786bc59467-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:13 [async_llm.py:261] Added request cmpl-f64e757f3b7c499ea8fc4b786bc59467-0.
INFO 03-02 01:19:14 [logger.py:42] Received request cmpl-1c8949c509954757bcfa24d9fca31185-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:14 [async_llm.py:261] Added request cmpl-1c8949c509954757bcfa24d9fca31185-0.
INFO 03-02 01:19:15 [logger.py:42] Received request cmpl-51089fac954444d1b95de39e01877817-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:15 [async_llm.py:261] Added request cmpl-51089fac954444d1b95de39e01877817-0.
INFO 03-02 01:19:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:16 [logger.py:42] Received request cmpl-7150973b6e2d44aaa01904afb6882db2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:16 [async_llm.py:261] Added request cmpl-7150973b6e2d44aaa01904afb6882db2-0.
INFO 03-02 01:19:17 [logger.py:42] Received request cmpl-26d9b1ef71b043ad99316bb60c0a9d8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:17 [async_llm.py:261] Added request cmpl-26d9b1ef71b043ad99316bb60c0a9d8e-0.
INFO 03-02 01:19:18 [logger.py:42] Received request cmpl-d6ef906ed7c247b5a6f9eb990f10e899-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:18 [async_llm.py:261] Added request cmpl-d6ef906ed7c247b5a6f9eb990f10e899-0.
INFO 03-02 01:19:19 [logger.py:42] Received request cmpl-22fb84cbab374f438fd9e9fb3efb38ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:19 [async_llm.py:261] Added request cmpl-22fb84cbab374f438fd9e9fb3efb38ce-0.
INFO 03-02 01:19:21 [logger.py:42] Received request cmpl-f5c18e92aa8f400390847073950e257f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:21 [async_llm.py:261] Added request cmpl-f5c18e92aa8f400390847073950e257f-0.
INFO 03-02 01:19:22 [logger.py:42] Received request cmpl-027baf383ce841b886ca89ddefc5dbb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:22 [async_llm.py:261] Added request cmpl-027baf383ce841b886ca89ddefc5dbb6-0.
INFO 03-02 01:19:23 [logger.py:42] Received request cmpl-b28f62905c8b40d4bda0e9dbb63985e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:23 [async_llm.py:261] Added request cmpl-b28f62905c8b40d4bda0e9dbb63985e8-0.
INFO 03-02 01:19:24 [logger.py:42] Received request cmpl-198e30e737834db7975c84b761c4b1b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:24 [async_llm.py:261] Added request cmpl-198e30e737834db7975c84b761c4b1b0-0.
INFO 03-02 01:19:25 [logger.py:42] Received request cmpl-52280eed311b4203a7a7fff915e1289e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:25 [async_llm.py:261] Added request cmpl-52280eed311b4203a7a7fff915e1289e-0.
INFO 03-02 01:19:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:26 [logger.py:42] Received request cmpl-754908b393dd4c48a32d1367111bfd68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:26 [async_llm.py:261] Added request cmpl-754908b393dd4c48a32d1367111bfd68-0.
INFO 03-02 01:19:27 [logger.py:42] Received request cmpl-e269917630cd4b2091d7487e3ab00af3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:27 [async_llm.py:261] Added request cmpl-e269917630cd4b2091d7487e3ab00af3-0.
INFO 03-02 01:19:28 [logger.py:42] Received request cmpl-2e01289632e5482f99fc888558a0babc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:28 [async_llm.py:261] Added request cmpl-2e01289632e5482f99fc888558a0babc-0.
INFO 03-02 01:19:29 [logger.py:42] Received request cmpl-cf86a5527808453fbc3cf9a4b012601d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:29 [async_llm.py:261] Added request cmpl-cf86a5527808453fbc3cf9a4b012601d-0.
INFO 03-02 01:19:30 [logger.py:42] Received request cmpl-9c34f862121f4f16afba761f4248eca3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:30 [async_llm.py:261] Added request cmpl-9c34f862121f4f16afba761f4248eca3-0.
INFO 03-02 01:19:32 [logger.py:42] Received request cmpl-06b7c52eb2f448d4a5636b1f98bf772b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:32 [async_llm.py:261] Added request cmpl-06b7c52eb2f448d4a5636b1f98bf772b-0.
INFO 03-02 01:19:33 [logger.py:42] Received request cmpl-f782ee49da2f4a65a607ba5b0dbe9743-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:33 [async_llm.py:261] Added request cmpl-f782ee49da2f4a65a607ba5b0dbe9743-0.
INFO 03-02 01:19:34 [logger.py:42] Received request cmpl-82c43dd6a5624e3eb83f66a8b11518ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:34 [async_llm.py:261] Added request cmpl-82c43dd6a5624e3eb83f66a8b11518ce-0.
INFO 03-02 01:19:35 [logger.py:42] Received request cmpl-e421a981f26344c9be481c531c9e8f4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:35 [async_llm.py:261] Added request cmpl-e421a981f26344c9be481c531c9e8f4c-0.
INFO 03-02 01:19:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:36 [logger.py:42] Received request cmpl-bd9dbd6648544f3299fafe87a8dc46d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:36 [async_llm.py:261] Added request cmpl-bd9dbd6648544f3299fafe87a8dc46d9-0.
INFO 03-02 01:19:37 [logger.py:42] Received request cmpl-5cbc2847959b448fae407076b81c298d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:37 [async_llm.py:261] Added request cmpl-5cbc2847959b448fae407076b81c298d-0.
INFO 03-02 01:19:38 [logger.py:42] Received request cmpl-73c338584f6e44d4bc08b065dbd4f0eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:38 [async_llm.py:261] Added request cmpl-73c338584f6e44d4bc08b065dbd4f0eb-0.
INFO 03-02 01:19:39 [logger.py:42] Received request cmpl-aca1068562bc48baadbe3e6e3e23f0cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:39 [async_llm.py:261] Added request cmpl-aca1068562bc48baadbe3e6e3e23f0cf-0.
INFO 03-02 01:19:40 [logger.py:42] Received request cmpl-5881803e0fb54867bd3ab5ed8ddd9711-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:40 [async_llm.py:261] Added request cmpl-5881803e0fb54867bd3ab5ed8ddd9711-0.
INFO 03-02 01:19:41 [logger.py:42] Received request cmpl-18d019fc48084554a486c00560ce54bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:41 [async_llm.py:261] Added request cmpl-18d019fc48084554a486c00560ce54bd-0.
INFO 03-02 01:19:42 [logger.py:42] Received request cmpl-262b3bfe7364496691b03a7bd5b7450c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:42 [async_llm.py:261] Added request cmpl-262b3bfe7364496691b03a7bd5b7450c-0.
INFO 03-02 01:19:44 [logger.py:42] Received request cmpl-e27aee991e3c4a43b6363e644a5b8869-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:44 [async_llm.py:261] Added request cmpl-e27aee991e3c4a43b6363e644a5b8869-0.
INFO 03-02 01:19:45 [logger.py:42] Received request cmpl-5cfc357206234eb080cec14e6c59d9e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:45 [async_llm.py:261] Added request cmpl-5cfc357206234eb080cec14e6c59d9e9-0.
INFO 03-02 01:19:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:19:46 [logger.py:42] Received request cmpl-e07e3956a8ec4f9e8db0619c8f59c4fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:46 [async_llm.py:261] Added request cmpl-e07e3956a8ec4f9e8db0619c8f59c4fa-0.
INFO 03-02 01:19:47 [logger.py:42] Received request cmpl-df5a20cbb4ab44f2a8ae8b6e064f39d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:47 [async_llm.py:261] Added request cmpl-df5a20cbb4ab44f2a8ae8b6e064f39d8-0.
INFO 03-02 01:19:48 [logger.py:42] Received request cmpl-b4e947070b2f45538725b583b80c6634-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:48 [async_llm.py:261] Added request cmpl-b4e947070b2f45538725b583b80c6634-0.
INFO 03-02 01:19:49 [logger.py:42] Received request cmpl-a36cc92b6d3f434696be3ced98035382-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:49 [async_llm.py:261] Added request cmpl-a36cc92b6d3f434696be3ced98035382-0.
INFO 03-02 01:19:50 [logger.py:42] Received request cmpl-b091213137634241b56c303ae7b2b038-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:50 [async_llm.py:261] Added request cmpl-b091213137634241b56c303ae7b2b038-0.
INFO 03-02 01:19:51 [logger.py:42] Received request cmpl-96546c047dcb46ea987f22486d08e5f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:51 [async_llm.py:261] Added request cmpl-96546c047dcb46ea987f22486d08e5f3-0.
INFO 03-02 01:19:52 [logger.py:42] Received request cmpl-85e04c7179294d1d8e74a9f61b6d47fc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:52 [async_llm.py:261] Added request cmpl-85e04c7179294d1d8e74a9f61b6d47fc-0.
INFO 03-02 01:19:53 [logger.py:42] Received request cmpl-30de141769a24c028eef8a78b090fb52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:53 [async_llm.py:261] Added request cmpl-30de141769a24c028eef8a78b090fb52-0.
INFO 03-02 01:19:55 [logger.py:42] Received request cmpl-e652b78540f34af382d305d5eda76f04-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:55 [async_llm.py:261] Added request cmpl-e652b78540f34af382d305d5eda76f04-0.
INFO 03-02 01:19:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
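The periodic `loggers.py` stats lines can be sanity-checked against the workload itself: each request carries 7 prompt tokens (the length of the logged `prompt_token_ids`) and generates at most 5 tokens (`max_tokens=5`), and the timestamps show roughly 9 requests landing in each 10-second reporting window. A quick back-of-the-envelope check (the requests-per-window figure is read off the timestamps above, not reported directly by the engine):

```python
# Workload parameters taken from the log entries above.
prompt_tokens_per_req = 7   # len(prompt_token_ids) == 7
gen_tokens_per_req = 5      # max_tokens=5 caps generation
reqs_per_window = 9         # ~1 request/s over a 10 s stats window
window_s = 10.0

prompt_tput = prompt_tokens_per_req * reqs_per_window / window_s
gen_tput = gen_tokens_per_req * reqs_per_window / window_s

print(prompt_tput)  # 6.3  -> matches "Avg prompt throughput: 6.3 tokens/s"
print(gen_tput)     # 4.5  -> matches "Avg generation throughput: 4.5 tokens/s"
```

Both derived figures agree with the engine's reported averages, which suggests the load generator is pacing at about one request per second.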
INFO 03-02 01:19:56 [logger.py:42] Received request cmpl-190538827ba34fea8e2357a25a3cc0b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:56 [async_llm.py:261] Added request cmpl-190538827ba34fea8e2357a25a3cc0b4-0.
INFO 03-02 01:19:57 [logger.py:42] Received request cmpl-fc3625b1f82c43a783e30c8d839195ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:57 [async_llm.py:261] Added request cmpl-fc3625b1f82c43a783e30c8d839195ee-0.
INFO 03-02 01:19:58 [logger.py:42] Received request cmpl-e56289e3f44249cfbdb711bf14356478-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:58 [async_llm.py:261] Added request cmpl-e56289e3f44249cfbdb711bf14356478-0.
INFO 03-02 01:19:59 [logger.py:42] Received request cmpl-3b1326c1442745e48dec5a528159d499-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:19:59 [async_llm.py:261] Added request cmpl-3b1326c1442745e48dec5a528159d499-0.
INFO 03-02 01:20:00 [logger.py:42] Received request cmpl-f15aa2aa3ac7460baa18908454555b60-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:00 [async_llm.py:261] Added request cmpl-f15aa2aa3ac7460baa18908454555b60-0.
INFO 03-02 01:20:01 [logger.py:42] Received request cmpl-a9c54f188d7846faa015add864083068-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:01 [async_llm.py:261] Added request cmpl-a9c54f188d7846faa015add864083068-0.
INFO 03-02 01:20:02 [logger.py:42] Received request cmpl-60140c4ba7c241e4aedf4be5d225edbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:02 [async_llm.py:261] Added request cmpl-60140c4ba7c241e4aedf4be5d225edbb-0.
INFO 03-02 01:20:03 [logger.py:42] Received request cmpl-f91e1214efc14501ad105607a3b0134d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:03 [async_llm.py:261] Added request cmpl-f91e1214efc14501ad105607a3b0134d-0.
INFO 03-02 01:20:04 [logger.py:42] Received request cmpl-8ec46b0a6b6741018f5ba2e9c90c6199-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:04 [async_llm.py:261] Added request cmpl-8ec46b0a6b6741018f5ba2e9c90c6199-0.
INFO 03-02 01:20:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:05 [logger.py:42] Received request cmpl-fd90bcc0a1b54cb6ac8b25654a1f1c8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:05 [async_llm.py:261] Added request cmpl-fd90bcc0a1b54cb6ac8b25654a1f1c8c-0.
INFO 03-02 01:20:07 [logger.py:42] Received request cmpl-b3d759800d7e4426804a2b5689b18ba3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:07 [async_llm.py:261] Added request cmpl-b3d759800d7e4426804a2b5689b18ba3-0.
INFO 03-02 01:20:08 [logger.py:42] Received request cmpl-c7e2ba84d3f1485cae7d72778fc0f54f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:08 [async_llm.py:261] Added request cmpl-c7e2ba84d3f1485cae7d72778fc0f54f-0.
INFO 03-02 01:20:09 [logger.py:42] Received request cmpl-5ba8cf52a82c4c72a9775e4884422ae4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:09 [async_llm.py:261] Added request cmpl-5ba8cf52a82c4c72a9775e4884422ae4-0.
INFO 03-02 01:20:10 [logger.py:42] Received request cmpl-f18bf7ba88d041a2bc8226ba283a2d0b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:10 [async_llm.py:261] Added request cmpl-f18bf7ba88d041a2bc8226ba283a2d0b-0.
INFO 03-02 01:20:11 [logger.py:42] Received request cmpl-c941ce7b59374047b6016241901e5f90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:11 [async_llm.py:261] Added request cmpl-c941ce7b59374047b6016241901e5f90-0.
INFO 03-02 01:20:12 [logger.py:42] Received request cmpl-71ff21ff8764499cb6e40245099d649b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:12 [async_llm.py:261] Added request cmpl-71ff21ff8764499cb6e40245099d649b-0.
INFO 03-02 01:20:13 [logger.py:42] Received request cmpl-ded12a43ba294ed084eef9259f80f6b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:13 [async_llm.py:261] Added request cmpl-ded12a43ba294ed084eef9259f80f6b5-0.
INFO 03-02 01:20:14 [logger.py:42] Received request cmpl-96b534d202994a6b9de9af8b324613bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:14 [async_llm.py:261] Added request cmpl-96b534d202994a6b9de9af8b324613bf-0.
INFO 03-02 01:20:15 [logger.py:42] Received request cmpl-9d37e63b31c941028366ea4e281030a0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:15 [async_llm.py:261] Added request cmpl-9d37e63b31c941028366ea4e281030a0-0.
INFO 03-02 01:20:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:16 [logger.py:42] Received request cmpl-bdaf2cc8050a4adeaba46bfb89e350b6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:16 [async_llm.py:261] Added request cmpl-bdaf2cc8050a4adeaba46bfb89e350b6-0.
INFO 03-02 01:20:18 [logger.py:42] Received request cmpl-a2b132de3cbe4fcd98b1809c635a586a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:18 [async_llm.py:261] Added request cmpl-a2b132de3cbe4fcd98b1809c635a586a-0.
INFO 03-02 01:20:19 [logger.py:42] Received request cmpl-e62da5639f6443b5aa773d917ad3df1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:19 [async_llm.py:261] Added request cmpl-e62da5639f6443b5aa773d917ad3df1f-0.
INFO 03-02 01:20:20 [logger.py:42] Received request cmpl-ac8ecac2242a4430bc205c48d9ea382f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:20 [async_llm.py:261] Added request cmpl-ac8ecac2242a4430bc205c48d9ea382f-0.
INFO 03-02 01:20:21 [logger.py:42] Received request cmpl-5b3b394b1679495f9921d9d4bb8ceb99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:21 [async_llm.py:261] Added request cmpl-5b3b394b1679495f9921d9d4bb8ceb99-0.
INFO 03-02 01:20:22 [logger.py:42] Received request cmpl-e2c2b96e06a84c04b1257dbe2bd8d801-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:22 [async_llm.py:261] Added request cmpl-e2c2b96e06a84c04b1257dbe2bd8d801-0.
INFO 03-02 01:20:23 [logger.py:42] Received request cmpl-6da245df3ff94ce68427fe691a23f949-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:23 [async_llm.py:261] Added request cmpl-6da245df3ff94ce68427fe691a23f949-0.
INFO 03-02 01:20:24 [logger.py:42] Received request cmpl-286ba04915a1480e92365c412ef88233-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:24 [async_llm.py:261] Added request cmpl-286ba04915a1480e92365c412ef88233-0.
INFO 03-02 01:20:25 [logger.py:42] Received request cmpl-4e8e82d47b944d578f1ee76d773e670f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:25 [async_llm.py:261] Added request cmpl-4e8e82d47b944d578f1ee76d773e670f-0.
INFO 03-02 01:20:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
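The benchmark prompt asks the model to "write a quick sort algorithm", though `max_tokens=5` truncates any real answer — these requests exercise latency and throughput, not generation quality. For reference only (this is an illustrative sketch, not model output), a complete response might resemble:

```python
def quicksort(xs):
    """Sort a list recursively: partition around a pivot, then sort each half."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```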
INFO 03-02 01:20:26 [logger.py:42] Received request cmpl-6d3afb2fdd5940eca0e51b33cb25c680-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:26 [async_llm.py:261] Added request cmpl-6d3afb2fdd5940eca0e51b33cb25c680-0.
INFO 03-02 01:20:27 [logger.py:42] Received request cmpl-564662a42fb14425be4613a28ee8dab4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:27 [async_llm.py:261] Added request cmpl-564662a42fb14425be4613a28ee8dab4-0.
INFO 03-02 01:20:29 [logger.py:42] Received request cmpl-415f1e703eac4cffb8dce448d88373d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:29 [async_llm.py:261] Added request cmpl-415f1e703eac4cffb8dce448d88373d4-0.
INFO 03-02 01:20:30 [logger.py:42] Received request cmpl-b76bd587ab414708b73cf076cec4726a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:30 [async_llm.py:261] Added request cmpl-b76bd587ab414708b73cf076cec4726a-0.
INFO 03-02 01:20:31 [logger.py:42] Received request cmpl-0f4199aed0964e628c2ff9757a113f0f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:31 [async_llm.py:261] Added request cmpl-0f4199aed0964e628c2ff9757a113f0f-0.
INFO 03-02 01:20:32 [logger.py:42] Received request cmpl-13e40819aad54448b5c2d78937642c9a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:32 [async_llm.py:261] Added request cmpl-13e40819aad54448b5c2d78937642c9a-0.
INFO 03-02 01:20:33 [logger.py:42] Received request cmpl-7a4a1c5d0de5422cbf769e4afb3d5509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:33 [async_llm.py:261] Added request cmpl-7a4a1c5d0de5422cbf769e4afb3d5509-0.
INFO 03-02 01:20:34 [logger.py:42] Received request cmpl-47be7235ecd94172a8995fa0dde54e77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:34 [async_llm.py:261] Added request cmpl-47be7235ecd94172a8995fa0dde54e77-0.
INFO 03-02 01:20:35 [logger.py:42] Received request cmpl-4385604dafdf4e46b6631adda390b6bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:35 [async_llm.py:261] Added request cmpl-4385604dafdf4e46b6631adda390b6bd-0.
INFO 03-02 01:20:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:36 [logger.py:42] Received request cmpl-82ccfb4e4609410db03b4d716dbd70ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:36 [async_llm.py:261] Added request cmpl-82ccfb4e4609410db03b4d716dbd70ed-0.
INFO 03-02 01:20:37 [logger.py:42] Received request cmpl-b42905959c2e4146a079b507f9d7de62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:37 [async_llm.py:261] Added request cmpl-b42905959c2e4146a079b507f9d7de62-0.
INFO 03-02 01:20:38 [logger.py:42] Received request cmpl-4886ced4b1744fdca768abaf883a2833-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:38 [async_llm.py:261] Added request cmpl-4886ced4b1744fdca768abaf883a2833-0.
INFO 03-02 01:20:39 [logger.py:42] Received request cmpl-cad9cd2881684c3e9d85ad80b690560d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:39 [async_llm.py:261] Added request cmpl-cad9cd2881684c3e9d85ad80b690560d-0.
INFO 03-02 01:20:41 [logger.py:42] Received request cmpl-75daa1ebaf7b42c6ba8b3d03c1181564-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:41 [async_llm.py:261] Added request cmpl-75daa1ebaf7b42c6ba8b3d03c1181564-0.
INFO 03-02 01:20:42 [logger.py:42] Received request cmpl-d239ace76f504c76bd2bf5664c796051-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:42 [async_llm.py:261] Added request cmpl-d239ace76f504c76bd2bf5664c796051-0.
INFO 03-02 01:20:43 [logger.py:42] Received request cmpl-880cdb26baf64b518757e992cfdbc1a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:43 [async_llm.py:261] Added request cmpl-880cdb26baf64b518757e992cfdbc1a3-0.
INFO 03-02 01:20:44 [logger.py:42] Received request cmpl-e21550fc526147db9e5e4b8b6261ae17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:44 [async_llm.py:261] Added request cmpl-e21550fc526147db9e5e4b8b6261ae17-0.
INFO 03-02 01:20:45 [logger.py:42] Received request cmpl-40106c4b59d640b6a3c0ece05cb988f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:45 [async_llm.py:261] Added request cmpl-40106c4b59d640b6a3c0ece05cb988f6-0.
INFO 03-02 01:20:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:46 [logger.py:42] Received request cmpl-840f2d0d21874c9d9eefd5439509080f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:46 [async_llm.py:261] Added request cmpl-840f2d0d21874c9d9eefd5439509080f-0.
INFO 03-02 01:20:47 [logger.py:42] Received request cmpl-4a87b735dd0e4eb6898d3fd04a99e214-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:47 [async_llm.py:261] Added request cmpl-4a87b735dd0e4eb6898d3fd04a99e214-0.
INFO 03-02 01:20:48 [logger.py:42] Received request cmpl-2975e56be912480b8076c747b0b922e9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:48 [async_llm.py:261] Added request cmpl-2975e56be912480b8076c747b0b922e9-0.
INFO 03-02 01:20:49 [logger.py:42] Received request cmpl-cc44a43e92c14ccbab6fc770af54d19b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:49 [async_llm.py:261] Added request cmpl-cc44a43e92c14ccbab6fc770af54d19b-0.
INFO 03-02 01:20:50 [logger.py:42] Received request cmpl-ab472c34654e4634b94a4b2c5a00650a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:50 [async_llm.py:261] Added request cmpl-ab472c34654e4634b94a4b2c5a00650a-0.
INFO 03-02 01:20:52 [logger.py:42] Received request cmpl-2c62007605704a14ae27a765962e4a32-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:52 [async_llm.py:261] Added request cmpl-2c62007605704a14ae27a765962e4a32-0.
INFO 03-02 01:20:53 [logger.py:42] Received request cmpl-b871b39c89994c6e863b515a82021539-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:53 [async_llm.py:261] Added request cmpl-b871b39c89994c6e863b515a82021539-0.
INFO 03-02 01:20:54 [logger.py:42] Received request cmpl-1532fbbb70564b8b8b009b3415f54fd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:54 [async_llm.py:261] Added request cmpl-1532fbbb70564b8b8b009b3415f54fd1-0.
INFO 03-02 01:20:55 [logger.py:42] Received request cmpl-5710da5eb38649718c3a37e63c36ba90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:55 [async_llm.py:261] Added request cmpl-5710da5eb38649718c3a37e63c36ba90-0.
INFO 03-02 01:20:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:20:56 [logger.py:42] Received request cmpl-dd339f5461f74d6582f5ff916698a417-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:56 [async_llm.py:261] Added request cmpl-dd339f5461f74d6582f5ff916698a417-0.
INFO 03-02 01:20:57 [logger.py:42] Received request cmpl-02a7dda5f4f64da686516612d56c7294-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:57 [async_llm.py:261] Added request cmpl-02a7dda5f4f64da686516612d56c7294-0.
INFO 03-02 01:20:58 [logger.py:42] Received request cmpl-ea79f31d03be4671ad919377bc606e9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:58 [async_llm.py:261] Added request cmpl-ea79f31d03be4671ad919377bc606e9d-0.
INFO 03-02 01:20:59 [logger.py:42] Received request cmpl-b0fb96d8e17844d989bb8fefce16aa92-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:20:59 [async_llm.py:261] Added request cmpl-b0fb96d8e17844d989bb8fefce16aa92-0.
INFO 03-02 01:21:00 [logger.py:42] Received request cmpl-ab1d538f992b4c4797026f34d3d9c31a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:00 [async_llm.py:261] Added request cmpl-ab1d538f992b4c4797026f34d3d9c31a-0.
INFO 03-02 01:21:01 [logger.py:42] Received request cmpl-d2597a9971c84e9eb94d5b3c3e2851de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:01 [async_llm.py:261] Added request cmpl-d2597a9971c84e9eb94d5b3c3e2851de-0.
INFO 03-02 01:21:02 [logger.py:42] Received request cmpl-5084e724a9004fb298b9eb81e2682651-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:02 [async_llm.py:261] Added request cmpl-5084e724a9004fb298b9eb81e2682651-0.
INFO 03-02 01:21:04 [logger.py:42] Received request cmpl-a3dc0931c3624bf2bdde69302542a61a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:04 [async_llm.py:261] Added request cmpl-a3dc0931c3624bf2bdde69302542a61a-0.
INFO 03-02 01:21:05 [logger.py:42] Received request cmpl-f57962b17e384a838509ad69493c319b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:05 [async_llm.py:261] Added request cmpl-f57962b17e384a838509ad69493c319b-0.
INFO 03-02 01:21:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided, 01:21:06–01:21:15 (cmpl-be744cb96b374e7cb5e505552d79d4d0 through cmpl-3814555181104327a2da5b2c373b0496): same Received request / 200 OK / Added request triple, same prompt and SamplingParams as above ...]
INFO 03-02 01:21:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided, 01:21:16–01:21:24 (cmpl-8e14cdf256554357b8df72a2e411b6f7 through cmpl-11d4b72f03d64d2b9f2456a8cf96b8ec) ...]
INFO 03-02 01:21:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 identical request cycles elided, 01:21:26–01:21:35 (cmpl-4623214b0cea4ce186de32290dbef8a9 through cmpl-64cb2964297d4596a80fad78bd4bdbba) ...]
INFO 03-02 01:21:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided, 01:21:36–01:21:45 (cmpl-fb952749ba374afb9ba7363c8c58bdae through cmpl-498ba41a923847089da49fb081218e4b) ...]
INFO 03-02 01:21:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:21:46 [logger.py:42] Received request cmpl-44f73ee8be9d4dd18de0b90e872b0985-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:46 [async_llm.py:261] Added request cmpl-44f73ee8be9d4dd18de0b90e872b0985-0.
INFO 03-02 01:21:47 [logger.py:42] Received request cmpl-76ef7f440fba470a8aa2d878bc5b64f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:47 [async_llm.py:261] Added request cmpl-76ef7f440fba470a8aa2d878bc5b64f8-0.
INFO 03-02 01:21:49 [logger.py:42] Received request cmpl-0691381a1d0e4f8e9eac6ab8c9114b9b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:49 [async_llm.py:261] Added request cmpl-0691381a1d0e4f8e9eac6ab8c9114b9b-0.
INFO 03-02 01:21:50 [logger.py:42] Received request cmpl-a86a48ef83664531bb262b7b2401adce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:50 [async_llm.py:261] Added request cmpl-a86a48ef83664531bb262b7b2401adce-0.
INFO 03-02 01:21:51 [logger.py:42] Received request cmpl-b8ff1a18ab0e4670b28361fa7a9d6295-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:51 [async_llm.py:261] Added request cmpl-b8ff1a18ab0e4670b28361fa7a9d6295-0.
INFO 03-02 01:21:52 [logger.py:42] Received request cmpl-864acf5eabd846d593ad2d22bc46200b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:52 [async_llm.py:261] Added request cmpl-864acf5eabd846d593ad2d22bc46200b-0.
INFO 03-02 01:21:53 [logger.py:42] Received request cmpl-2e97fdafff42496aa76262f4fe1a810f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:53 [async_llm.py:261] Added request cmpl-2e97fdafff42496aa76262f4fe1a810f-0.
INFO 03-02 01:21:54 [logger.py:42] Received request cmpl-6f9c9a6af7a44a4ea0a9310c670b86d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:54 [async_llm.py:261] Added request cmpl-6f9c9a6af7a44a4ea0a9310c670b86d9-0.
INFO 03-02 01:21:55 [logger.py:42] Received request cmpl-2ae4b06fde9b4ccd8385cc17c6ebf919-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:55 [async_llm.py:261] Added request cmpl-2ae4b06fde9b4ccd8385cc17c6ebf919-0.
INFO 03-02 01:21:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:21:56 [logger.py:42] Received request cmpl-b937e3ba5b004ed38eb18e33fa2ece9d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:56 [async_llm.py:261] Added request cmpl-b937e3ba5b004ed38eb18e33fa2ece9d-0.
INFO 03-02 01:21:57 [logger.py:42] Received request cmpl-33f4d723013e48f38b06112eaddd0dfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:57 [async_llm.py:261] Added request cmpl-33f4d723013e48f38b06112eaddd0dfe-0.
INFO 03-02 01:21:58 [logger.py:42] Received request cmpl-316f0310d42c43afb738450c82c2592c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:58 [async_llm.py:261] Added request cmpl-316f0310d42c43afb738450c82c2592c-0.
INFO 03-02 01:21:59 [logger.py:42] Received request cmpl-1acb415f2109435d9890da8b7c4825cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:21:59 [async_llm.py:261] Added request cmpl-1acb415f2109435d9890da8b7c4825cb-0.
INFO 03-02 01:22:01 [logger.py:42] Received request cmpl-5b6222e411514203808dd8555ab36ec8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:01 [async_llm.py:261] Added request cmpl-5b6222e411514203808dd8555ab36ec8-0.
INFO 03-02 01:22:02 [logger.py:42] Received request cmpl-9092b4bd3c51496c8e1c83014155e54c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:02 [async_llm.py:261] Added request cmpl-9092b4bd3c51496c8e1c83014155e54c-0.
INFO 03-02 01:22:03 [logger.py:42] Received request cmpl-7bccdb437fa84951a9e9033ab234e186-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:03 [async_llm.py:261] Added request cmpl-7bccdb437fa84951a9e9033ab234e186-0.
INFO 03-02 01:22:04 [logger.py:42] Received request cmpl-e1fc3f7254e94e6eaea015155801c4e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:04 [async_llm.py:261] Added request cmpl-e1fc3f7254e94e6eaea015155801c4e7-0.
INFO 03-02 01:22:05 [logger.py:42] Received request cmpl-5887e0deb5e04d58ba4789221eb79783-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:05 [async_llm.py:261] Added request cmpl-5887e0deb5e04d58ba4789221eb79783-0.
INFO 03-02 01:22:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:06 [logger.py:42] Received request cmpl-ec43d2f97888463ea14f7129a4416e97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:06 [async_llm.py:261] Added request cmpl-ec43d2f97888463ea14f7129a4416e97-0.
INFO 03-02 01:22:07 [logger.py:42] Received request cmpl-df74c94f48ef4569bf017339ae7eb80e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:07 [async_llm.py:261] Added request cmpl-df74c94f48ef4569bf017339ae7eb80e-0.
INFO 03-02 01:22:08 [logger.py:42] Received request cmpl-ac38b0c1e54f4ab3a51e2eb24f9b3137-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:08 [async_llm.py:261] Added request cmpl-ac38b0c1e54f4ab3a51e2eb24f9b3137-0.
INFO 03-02 01:22:09 [logger.py:42] Received request cmpl-26d3d36b37de4f039e6ebf396d695c9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:09 [async_llm.py:261] Added request cmpl-26d3d36b37de4f039e6ebf396d695c9c-0.
INFO 03-02 01:22:10 [logger.py:42] Received request cmpl-5c05e9124948404eb5ec6f2136d5912b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:10 [async_llm.py:261] Added request cmpl-5c05e9124948404eb5ec6f2136d5912b-0.
INFO 03-02 01:22:12 [logger.py:42] Received request cmpl-bc1f8acbf93d4172a4746f3c6ed26b54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:12 [async_llm.py:261] Added request cmpl-bc1f8acbf93d4172a4746f3c6ed26b54-0.
INFO 03-02 01:22:13 [logger.py:42] Received request cmpl-59fd35baff5f418cb6502d002350893f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:13 [async_llm.py:261] Added request cmpl-59fd35baff5f418cb6502d002350893f-0.
INFO 03-02 01:22:14 [logger.py:42] Received request cmpl-689222537a2e49a0b2d69bf4c67b5794-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:14 [async_llm.py:261] Added request cmpl-689222537a2e49a0b2d69bf4c67b5794-0.
INFO 03-02 01:22:15 [logger.py:42] Received request cmpl-95bcda65640242718f8c386d21d7fd17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:15 [async_llm.py:261] Added request cmpl-95bcda65640242718f8c386d21d7fd17-0.
INFO 03-02 01:22:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:16 [logger.py:42] Received request cmpl-b8c357b4f1754eee949e7e0149eecd1f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:16 [async_llm.py:261] Added request cmpl-b8c357b4f1754eee949e7e0149eecd1f-0.
INFO 03-02 01:22:17 [logger.py:42] Received request cmpl-e095dca4b79e4b6b8e5f97cfa11ee509-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:17 [async_llm.py:261] Added request cmpl-e095dca4b79e4b6b8e5f97cfa11ee509-0.
INFO 03-02 01:22:18 [logger.py:42] Received request cmpl-e8f710727dde40388bb5aaecc3e5bdb0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:18 [async_llm.py:261] Added request cmpl-e8f710727dde40388bb5aaecc3e5bdb0-0.
INFO 03-02 01:22:19 [logger.py:42] Received request cmpl-783cfddc568f4790b275e949c349448b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:19 [async_llm.py:261] Added request cmpl-783cfddc568f4790b275e949c349448b-0.
INFO 03-02 01:22:20 [logger.py:42] Received request cmpl-697d9c2721264cb6915735cae1c7bde8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:20 [async_llm.py:261] Added request cmpl-697d9c2721264cb6915735cae1c7bde8-0.
INFO 03-02 01:22:21 [logger.py:42] Received request cmpl-bc6e949163fa4cde88be9d4f24467b51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:21 [async_llm.py:261] Added request cmpl-bc6e949163fa4cde88be9d4f24467b51-0.
INFO 03-02 01:22:23 [logger.py:42] Received request cmpl-eff50430c2a14f7bb124519e0514b66c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:23 [async_llm.py:261] Added request cmpl-eff50430c2a14f7bb124519e0514b66c-0.
INFO 03-02 01:22:24 [logger.py:42] Received request cmpl-72530d49e50d45bf9346f5f606ef245f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:24 [async_llm.py:261] Added request cmpl-72530d49e50d45bf9346f5f606ef245f-0.
INFO 03-02 01:22:25 [logger.py:42] Received request cmpl-6e5c90646380499aa826b7a183b12581-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:25 [async_llm.py:261] Added request cmpl-6e5c90646380499aa826b7a183b12581-0.
INFO 03-02 01:22:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:26 [logger.py:42] Received request cmpl-a1261caeeba1425db62465d3d4ed1278-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:26 [async_llm.py:261] Added request cmpl-a1261caeeba1425db62465d3d4ed1278-0.
INFO 03-02 01:22:27 [logger.py:42] Received request cmpl-5302561a005f42a694e4b9c0386ecf99-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:27 [async_llm.py:261] Added request cmpl-5302561a005f42a694e4b9c0386ecf99-0.
INFO 03-02 01:22:28 [logger.py:42] Received request cmpl-2e9d0dc462ef4537bed2e674568292e2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:28 [async_llm.py:261] Added request cmpl-2e9d0dc462ef4537bed2e674568292e2-0.
INFO 03-02 01:22:29 [logger.py:42] Received request cmpl-b82889458d9148efb86bcc51c203ea2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:29 [async_llm.py:261] Added request cmpl-b82889458d9148efb86bcc51c203ea2f-0.
INFO 03-02 01:22:30 [logger.py:42] Received request cmpl-04a94a42e2aa4f4dbe775f0bdb1c070b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:30 [async_llm.py:261] Added request cmpl-04a94a42e2aa4f4dbe775f0bdb1c070b-0.
INFO 03-02 01:22:31 [logger.py:42] Received request cmpl-fbafaaea9854439f8ba4a46ad73523c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:31 [async_llm.py:261] Added request cmpl-fbafaaea9854439f8ba4a46ad73523c6-0.
INFO 03-02 01:22:32 [logger.py:42] Received request cmpl-ce95f9f211d745cbb5cbde36ab4853ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:32 [async_llm.py:261] Added request cmpl-ce95f9f211d745cbb5cbde36ab4853ed-0.
INFO 03-02 01:22:33 [logger.py:42] Received request cmpl-1629f6901b764939bdeead6ed9eb8d24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:33 [async_llm.py:261] Added request cmpl-1629f6901b764939bdeead6ed9eb8d24-0.
INFO 03-02 01:22:35 [logger.py:42] Received request cmpl-46952c6589bf4bf698bfee93a7bc5041-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:35 [async_llm.py:261] Added request cmpl-46952c6589bf4bf698bfee93a7bc5041-0.
INFO 03-02 01:22:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:36 [logger.py:42] Received request cmpl-5dc7bb7c86b44130b5f2431ac29b2793-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:36 [async_llm.py:261] Added request cmpl-5dc7bb7c86b44130b5f2431ac29b2793-0.
INFO 03-02 01:22:37 [logger.py:42] Received request cmpl-2e2aa72c38ba42e99952f97e49ddc504-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:37 [async_llm.py:261] Added request cmpl-2e2aa72c38ba42e99952f97e49ddc504-0.
INFO 03-02 01:22:38 [logger.py:42] Received request cmpl-51a8a886d6704f8890ed4e154bb4a724-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:38 [async_llm.py:261] Added request cmpl-51a8a886d6704f8890ed4e154bb4a724-0.
INFO 03-02 01:22:39 [logger.py:42] Received request cmpl-11fe09e2a282437483df7a6162f678f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:39 [async_llm.py:261] Added request cmpl-11fe09e2a282437483df7a6162f678f2-0.
INFO 03-02 01:22:40 [logger.py:42] Received request cmpl-c9b51e3354db46f3aa2da24534f6ad77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:40 [async_llm.py:261] Added request cmpl-c9b51e3354db46f3aa2da24534f6ad77-0.
INFO 03-02 01:22:41 [logger.py:42] Received request cmpl-63caa0856b8c4814888e5169597c108e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:41 [async_llm.py:261] Added request cmpl-63caa0856b8c4814888e5169597c108e-0.
INFO 03-02 01:22:42 [logger.py:42] Received request cmpl-c1aa2246df10415ca2d32a4f2115e0a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:42 [async_llm.py:261] Added request cmpl-c1aa2246df10415ca2d32a4f2115e0a4-0.
INFO 03-02 01:22:43 [logger.py:42] Received request cmpl-e9efcb35afd84630816250a89fc1e7da-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:43 [async_llm.py:261] Added request cmpl-e9efcb35afd84630816250a89fc1e7da-0.
INFO 03-02 01:22:44 [logger.py:42] Received request cmpl-6ac17bed270043a9a3c4433ba68383e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:44 [async_llm.py:261] Added request cmpl-6ac17bed270043a9a3c4433ba68383e0-0.
INFO 03-02 01:22:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
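The `loggers.py:116` lines above are vLLM's periodic engine-stats heartbeat. When monitoring a funcpod from its log stream, these metrics can be pulled out with a small parser; a minimal sketch (the `parse_engine_stats` helper is hypothetical, written against the exact field layout shown in the log line above):

```python
import re

def parse_engine_stats(line: str) -> dict:
    """Extract the periodic vLLM engine metrics from a loggers.py stats line.

    Matches the field layout seen in the log above:
      Avg prompt throughput, Avg generation throughput, Running/Waiting
      request counts, GPU KV cache usage, and Prefix cache hit rate.
    Returns an empty dict when the line is not a stats line.
    """
    pattern = (
        r"Avg prompt throughput: ([\d.]+) tokens/s, "
        r"Avg generation throughput: ([\d.]+) tokens/s, "
        r"Running: (\d+) reqs, Waiting: (\d+) reqs, "
        r"GPU KV cache usage: ([\d.]+)%, "
        r"Prefix cache hit rate: ([\d.]+)%"
    )
    m = re.search(pattern, line)
    if not m:
        return {}
    return {
        "prompt_tps": float(m.group(1)),
        "generation_tps": float(m.group(2)),
        "running_reqs": int(m.group(3)),
        "waiting_reqs": int(m.group(4)),
        "kv_cache_usage_pct": float(m.group(5)),
        "prefix_cache_hit_pct": float(m.group(6)),
    }

# Example: the stats line logged at 01:22:45 above.
line = ("INFO 03-02 01:22:45 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
        "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
        "Prefix cache hit rate: 0.0%")
stats = parse_engine_stats(line)
```

Note that `Running: 0 reqs` alongside nonzero throughput is expected here: each request generates at most 5 tokens, so requests complete between heartbeats.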
INFO 03-02 01:22:46 [logger.py:42] Received request cmpl-95fea51eccd04856990f7c9f32e07cb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:46 [async_llm.py:261] Added request cmpl-95fea51eccd04856990f7c9f32e07cb9-0.
INFO 03-02 01:22:47 [logger.py:42] Received request cmpl-a7cf0927302e4e23912ecc19ead1b42e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:47 [async_llm.py:261] Added request cmpl-a7cf0927302e4e23912ecc19ead1b42e-0.
INFO 03-02 01:22:48 [logger.py:42] Received request cmpl-9c237752b1c34b198e49edd3305b52d7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:48 [async_llm.py:261] Added request cmpl-9c237752b1c34b198e49edd3305b52d7-0.
INFO 03-02 01:22:49 [logger.py:42] Received request cmpl-865d2e8ffef44d7e90824ab12abd6b90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:49 [async_llm.py:261] Added request cmpl-865d2e8ffef44d7e90824ab12abd6b90-0.
INFO 03-02 01:22:50 [logger.py:42] Received request cmpl-df090ab2c921452694e66ba2c19923bb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:50 [async_llm.py:261] Added request cmpl-df090ab2c921452694e66ba2c19923bb-0.
INFO 03-02 01:22:51 [logger.py:42] Received request cmpl-0d09586a35384a47a79beee30c441d76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:51 [async_llm.py:261] Added request cmpl-0d09586a35384a47a79beee30c441d76-0.
INFO 03-02 01:22:52 [logger.py:42] Received request cmpl-315ce14a98fb428f91d200948d3a199a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:52 [async_llm.py:261] Added request cmpl-315ce14a98fb428f91d200948d3a199a-0.
INFO 03-02 01:22:53 [logger.py:42] Received request cmpl-ba52caa8019647e998e03c8f3d5ba34d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:53 [async_llm.py:261] Added request cmpl-ba52caa8019647e998e03c8f3d5ba34d-0.
INFO 03-02 01:22:54 [logger.py:42] Received request cmpl-f1152a962d7f4d1e806cb24e5c0c8c94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:54 [async_llm.py:261] Added request cmpl-f1152a962d7f4d1e806cb24e5c0c8c94-0.
INFO 03-02 01:22:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:22:55 [logger.py:42] Received request cmpl-e4baed89e4a14ea2b0397b43d43e00c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:55 [async_llm.py:261] Added request cmpl-e4baed89e4a14ea2b0397b43d43e00c4-0.
INFO 03-02 01:22:56 [logger.py:42] Received request cmpl-e2fa2b902b9d4943b128f1f7ec5cdc64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:56 [async_llm.py:261] Added request cmpl-e2fa2b902b9d4943b128f1f7ec5cdc64-0.
INFO 03-02 01:22:58 [logger.py:42] Received request cmpl-fd8e7b6e864f4ca1af1b91b75b6b4a7e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:58 [async_llm.py:261] Added request cmpl-fd8e7b6e864f4ca1af1b91b75b6b4a7e-0.
INFO 03-02 01:22:59 [logger.py:42] Received request cmpl-c06e3d3fb46c4e458a177671f8e805bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:22:59 [async_llm.py:261] Added request cmpl-c06e3d3fb46c4e458a177671f8e805bf-0.
INFO 03-02 01:23:00 [logger.py:42] Received request cmpl-480ab610dcd24246aa0020dbe5d9271d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:00 [async_llm.py:261] Added request cmpl-480ab610dcd24246aa0020dbe5d9271d-0.
INFO 03-02 01:23:01 [logger.py:42] Received request cmpl-01598c8058904fafac66056d24332b7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:01 [async_llm.py:261] Added request cmpl-01598c8058904fafac66056d24332b7a-0.
INFO 03-02 01:23:02 [logger.py:42] Received request cmpl-9292378cb8424294a75a3bd249a1d33b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:02 [async_llm.py:261] Added request cmpl-9292378cb8424294a75a3bd249a1d33b-0.
INFO 03-02 01:23:03 [logger.py:42] Received request cmpl-100280282a734222aa527fc20287d3fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:03 [async_llm.py:261] Added request cmpl-100280282a734222aa527fc20287d3fe-0.
INFO 03-02 01:23:04 [logger.py:42] Received request cmpl-12a84b300a0c4097830ab0afcab2d061-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:04 [async_llm.py:261] Added request cmpl-12a84b300a0c4097830ab0afcab2d061-0.
INFO 03-02 01:23:05 [logger.py:42] Received request cmpl-f029de7b86964583acd86b4437f10293-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:05 [async_llm.py:261] Added request cmpl-f029de7b86964583acd86b4437f10293-0.
INFO 03-02 01:23:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:23:06 [logger.py:42] Received request cmpl-8c627271e79c4f7190b0d3f95a47e292-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:06 [async_llm.py:261] Added request cmpl-8c627271e79c4f7190b0d3f95a47e292-0.
INFO 03-02 01:23:07 [logger.py:42] Received request cmpl-0e7ab2d2d34044fe9fcf2ddf489b0c55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:07 [async_llm.py:261] Added request cmpl-0e7ab2d2d34044fe9fcf2ddf489b0c55-0.
INFO 03-02 01:23:09 [logger.py:42] Received request cmpl-1307e9531e63437da3a01786fdb09165-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:09 [async_llm.py:261] Added request cmpl-1307e9531e63437da3a01786fdb09165-0.
INFO 03-02 01:23:10 [logger.py:42] Received request cmpl-f9413683457b49ae8c2f7fff1d04a68f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:10 [async_llm.py:261] Added request cmpl-f9413683457b49ae8c2f7fff1d04a68f-0.
INFO 03-02 01:23:11 [logger.py:42] Received request cmpl-00dd956e245846c39360be666170c610-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:11 [async_llm.py:261] Added request cmpl-00dd956e245846c39360be666170c610-0.
INFO 03-02 01:23:12 [logger.py:42] Received request cmpl-079d2614c16a44099e920b7290c2ed1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:12 [async_llm.py:261] Added request cmpl-079d2614c16a44099e920b7290c2ed1b-0.
INFO 03-02 01:23:13 [logger.py:42] Received request cmpl-9e8574e4a2be46449bee234a7820def6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:13 [async_llm.py:261] Added request cmpl-9e8574e4a2be46449bee234a7820def6-0.
INFO 03-02 01:23:14 [logger.py:42] Received request cmpl-0e6a9b781445447fadb779d88ba56606-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:14 [async_llm.py:261] Added request cmpl-0e6a9b781445447fadb779d88ba56606-0.
INFO 03-02 01:23:15 [logger.py:42] Received request cmpl-b50ac0ab705742c9b7a4df8bc6c50cc2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:15 [async_llm.py:261] Added request cmpl-b50ac0ab705742c9b7a4df8bc6c50cc2-0.
INFO 03-02 01:23:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:23:16 [logger.py:42] Received request cmpl-f093ba8e20ea45e4bf75e4ca6eb00e87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:16 [async_llm.py:261] Added request cmpl-f093ba8e20ea45e4bf75e4ca6eb00e87-0.
INFO 03-02 01:23:17 [logger.py:42] Received request cmpl-5bfcac78812a451983fc6ec9a51aac53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:17 [async_llm.py:261] Added request cmpl-5bfcac78812a451983fc6ec9a51aac53-0.
INFO 03-02 01:23:18 [logger.py:42] Received request cmpl-8b416bcc923e4b45be47164c41c9235d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:18 [async_llm.py:261] Added request cmpl-8b416bcc923e4b45be47164c41c9235d-0.
INFO 03-02 01:23:20 [logger.py:42] Received request cmpl-700b1655a3284d7da9bf31adb31e14b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:20 [async_llm.py:261] Added request cmpl-700b1655a3284d7da9bf31adb31e14b7-0.
INFO 03-02 01:23:21 [logger.py:42] Received request cmpl-66a1b8898549425881971dc7d23bc1d5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:21 [async_llm.py:261] Added request cmpl-66a1b8898549425881971dc7d23bc1d5-0.
INFO 03-02 01:23:22 [logger.py:42] Received request cmpl-715e3ad2ebe3417caac2b80f4dbe7c7d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:22 [async_llm.py:261] Added request cmpl-715e3ad2ebe3417caac2b80f4dbe7c7d-0.
INFO 03-02 01:23:23 [logger.py:42] Received request cmpl-0142fd3125b2430cb6e1ff91d05dde76-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:23 [async_llm.py:261] Added request cmpl-0142fd3125b2430cb6e1ff91d05dde76-0.
INFO 03-02 01:23:24 [logger.py:42] Received request cmpl-a57685c46cda48e8b9d5474fd5b0595c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:24 [async_llm.py:261] Added request cmpl-a57685c46cda48e8b9d5474fd5b0595c-0.
INFO 03-02 01:23:25 [logger.py:42] Received request cmpl-91c9ff85201b498cb7147c43e08c3377-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:25 [async_llm.py:261] Added request cmpl-91c9ff85201b498cb7147c43e08c3377-0.
INFO 03-02 01:23:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:23:26 [logger.py:42] Received request cmpl-416614cb83784dfabd4584e139bca2ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:26 [async_llm.py:261] Added request cmpl-416614cb83784dfabd4584e139bca2ae-0.
INFO 03-02 01:23:27 [logger.py:42] Received request cmpl-9ee0da40738845d99738e51028ba677d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:27 [async_llm.py:261] Added request cmpl-9ee0da40738845d99738e51028ba677d-0.
INFO 03-02 01:23:28 [logger.py:42] Received request cmpl-083bbd68f7a24cb78b214a6cd7539ad5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:28 [async_llm.py:261] Added request cmpl-083bbd68f7a24cb78b214a6cd7539ad5-0.
INFO 03-02 01:23:29 [logger.py:42] Received request cmpl-a1339e6ccd5442c99b74e23ab7036b4c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:29 [async_llm.py:261] Added request cmpl-a1339e6ccd5442c99b74e23ab7036b4c-0.
INFO 03-02 01:23:30 [logger.py:42] Received request cmpl-293de54cd81c465fac29858f9ed12165-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:30 [async_llm.py:261] Added request cmpl-293de54cd81c465fac29858f9ed12165-0.
INFO 03-02 01:23:32 [logger.py:42] Received request cmpl-6fdd9ad2de1443edac33e2a39d6a229d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:32 [async_llm.py:261] Added request cmpl-6fdd9ad2de1443edac33e2a39d6a229d-0.
INFO 03-02 01:23:33 [logger.py:42] Received request cmpl-9dd70ffe9e0440d1b62176600c70502f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:33 [async_llm.py:261] Added request cmpl-9dd70ffe9e0440d1b62176600c70502f-0.
INFO 03-02 01:23:34 [logger.py:42] Received request cmpl-4937fdeaefce492f88533ff004d7d99c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:34 [async_llm.py:261] Added request cmpl-4937fdeaefce492f88533ff004d7d99c-0.
INFO 03-02 01:23:35 [logger.py:42] Received request cmpl-4cc65fa7fd3849c7baf64751cc66a3b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:35 [async_llm.py:261] Added request cmpl-4cc65fa7fd3849c7baf64751cc66a3b3-0.
INFO 03-02 01:23:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:23:36 [logger.py:42] Received request cmpl-072c562cb55146a698236a135e96f178-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:36 [async_llm.py:261] Added request cmpl-072c562cb55146a698236a135e96f178-0.
INFO 03-02 01:23:37 [logger.py:42] Received request cmpl-ab306e733c1147719a7d499388658233-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:37 [async_llm.py:261] Added request cmpl-ab306e733c1147719a7d499388658233-0.
INFO 03-02 01:23:38 [logger.py:42] Received request cmpl-5d3073b3af0d4466b14589c3e908cdd1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:38 [async_llm.py:261] Added request cmpl-5d3073b3af0d4466b14589c3e908cdd1-0.
INFO 03-02 01:23:39 [logger.py:42] Received request cmpl-9645123356b24211881175dc184061ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:39 [async_llm.py:261] Added request cmpl-9645123356b24211881175dc184061ff-0.
INFO 03-02 01:23:40 [logger.py:42] Received request cmpl-a750e4b98a2b41ca84f09967778a4985-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:40 [async_llm.py:261] Added request cmpl-a750e4b98a2b41ca84f09967778a4985-0.
INFO 03-02 01:23:41 [logger.py:42] Received request cmpl-40248105668747ad8a1be7384c2d121e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:41 [async_llm.py:261] Added request cmpl-40248105668747ad8a1be7384c2d121e-0.
INFO 03-02 01:23:43 [logger.py:42] Received request cmpl-d6f16eec26d24328865af290311d7d94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:43 [async_llm.py:261] Added request cmpl-d6f16eec26d24328865af290311d7d94-0.
INFO 03-02 01:23:44 [logger.py:42] Received request cmpl-d7faf9237b624eeb8901163f807a847a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:44 [async_llm.py:261] Added request cmpl-d7faf9237b624eeb8901163f807a847a-0.
INFO 03-02 01:23:45 [logger.py:42] Received request cmpl-766166465a8c4d27888e8890b3d2b13f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:45 [async_llm.py:261] Added request cmpl-766166465a8c4d27888e8890b3d2b13f-0.
INFO 03-02 01:23:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
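The periodic stats line can be cross-checked against the request stream: each logged request carries 7 prompt tokens (`len(prompt_token_ids) == 7`) and at most 5 generated tokens (`max_tokens=5`), and the timestamps show roughly one request per second. A quick arithmetic sketch of that consistency check (token counts and throughput figures are read from the log; the stats logger's exact averaging window is not shown and is left out of the math):

```python
# Cross-check the reported engine averages against the observed request stream.
PROMPT_TOKENS = 7      # len(prompt_token_ids) in each "Received request" entry
MAX_NEW_TOKENS = 5     # max_tokens=5 in SamplingParams

# Figures from the stats line above:
avg_prompt_tps = 6.3   # "Avg prompt throughput: 6.3 tokens/s"
avg_gen_tps = 4.5      # "Avg generation throughput: 4.5 tokens/s"

# Implied request rates from each metric:
req_rate_from_prompt = avg_prompt_tps / PROMPT_TOKENS   # 6.3 / 7 = 0.9 req/s
req_rate_from_gen = avg_gen_tps / MAX_NEW_TOKENS        # 4.5 / 5 = 0.9 req/s

# Both metrics imply ~0.9 requests/second, consistent with the roughly
# one-per-second arrival timestamps in the surrounding log entries.
print(req_rate_from_prompt, req_rate_from_gen)
```

The agreement of both derived rates (0.9 req/s) with the visible arrival cadence suggests the engine is draining each short request well within the logging interval, which also matches `Running: 0 reqs, Waiting: 0 reqs`.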
INFO 03-02 01:23:46 [logger.py:42] Received request cmpl-eb840d2b4d7b4457942c3c651c074d84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:46 [async_llm.py:261] Added request cmpl-eb840d2b4d7b4457942c3c651c074d84-0.
INFO 03-02 01:23:47 [logger.py:42] Received request cmpl-6930eacb4f7a452b966f6f0f8f99e064-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:47 [async_llm.py:261] Added request cmpl-6930eacb4f7a452b966f6f0f8f99e064-0.
INFO 03-02 01:23:48 [logger.py:42] Received request cmpl-8dea6b2b08b546148a2c3b0980af36a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:48 [async_llm.py:261] Added request cmpl-8dea6b2b08b546148a2c3b0980af36a7-0.
INFO 03-02 01:23:49 [logger.py:42] Received request cmpl-3a46a1ed49fd483fbce790776900aaf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:49 [async_llm.py:261] Added request cmpl-3a46a1ed49fd483fbce790776900aaf7-0.
INFO 03-02 01:23:50 [logger.py:42] Received request cmpl-bd6c3a1d69a440338a27f355df9ad6b0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:50 [async_llm.py:261] Added request cmpl-bd6c3a1d69a440338a27f355df9ad6b0-0.
INFO 03-02 01:23:51 [logger.py:42] Received request cmpl-8755b1e100da47d882fee09e1982ed8c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:51 [async_llm.py:261] Added request cmpl-8755b1e100da47d882fee09e1982ed8c-0.
INFO 03-02 01:23:52 [logger.py:42] Received request cmpl-e768a7c0ae464410bdda4d461ca58bb1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:52 [async_llm.py:261] Added request cmpl-e768a7c0ae464410bdda4d461ca58bb1-0.
INFO 03-02 01:23:53 [logger.py:42] Received request cmpl-a99ccd846de14639a2fc4f22e96faccd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:53 [async_llm.py:261] Added request cmpl-a99ccd846de14639a2fc4f22e96faccd-0.
INFO 03-02 01:23:55 [logger.py:42] Received request cmpl-a0e2630cec8d4e80a545f8d601497a6a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:55 [async_llm.py:261] Added request cmpl-a0e2630cec8d4e80a545f8d601497a6a-0.
INFO 03-02 01:23:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:23:56 [logger.py:42] Received request cmpl-75a24eb9bd9b4a8a8755ba99504812d0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:56 [async_llm.py:261] Added request cmpl-75a24eb9bd9b4a8a8755ba99504812d0-0.
INFO 03-02 01:23:57 [logger.py:42] Received request cmpl-799fda283ec44cb3a39bf8eb19413aed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:57 [async_llm.py:261] Added request cmpl-799fda283ec44cb3a39bf8eb19413aed-0.
INFO 03-02 01:23:58 [logger.py:42] Received request cmpl-c27073543135452c87db68c32651a305-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:58 [async_llm.py:261] Added request cmpl-c27073543135452c87db68c32651a305-0.
INFO 03-02 01:23:59 [logger.py:42] Received request cmpl-03f602bfde7649f2a3be12403fc0b39b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:23:59 [async_llm.py:261] Added request cmpl-03f602bfde7649f2a3be12403fc0b39b-0.
INFO 03-02 01:24:00 [logger.py:42] Received request cmpl-dabb0b601222403983a424b81f09e35b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:00 [async_llm.py:261] Added request cmpl-dabb0b601222403983a424b81f09e35b-0.
INFO 03-02 01:24:01 [logger.py:42] Received request cmpl-7de24d8abd8140b2b5c1f46df7c1f270-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:01 [async_llm.py:261] Added request cmpl-7de24d8abd8140b2b5c1f46df7c1f270-0.
INFO 03-02 01:24:02 [logger.py:42] Received request cmpl-1c464516d7be4455bc3f2fa5c16bde3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:02 [async_llm.py:261] Added request cmpl-1c464516d7be4455bc3f2fa5c16bde3d-0.
INFO 03-02 01:24:03 [logger.py:42] Received request cmpl-38e072968aee42dea16b17a10b080347-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:03 [async_llm.py:261] Added request cmpl-38e072968aee42dea16b17a10b080347-0.
INFO 03-02 01:24:04 [logger.py:42] Received request cmpl-761652eb473f42c48c14ef7143af380f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:04 [async_llm.py:261] Added request cmpl-761652eb473f42c48c14ef7143af380f-0.
INFO 03-02 01:24:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:06 [logger.py:42] Received request cmpl-a1723b3c7a8041f6a4a18c0285ac068e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:06 [async_llm.py:261] Added request cmpl-a1723b3c7a8041f6a4a18c0285ac068e-0.
INFO 03-02 01:24:07 [logger.py:42] Received request cmpl-75f41360902f40f89abdb53d3c24b758-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:07 [async_llm.py:261] Added request cmpl-75f41360902f40f89abdb53d3c24b758-0.
INFO 03-02 01:24:08 [logger.py:42] Received request cmpl-3c9594d780c14160b34b4690cc9142cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:08 [async_llm.py:261] Added request cmpl-3c9594d780c14160b34b4690cc9142cc-0.
INFO 03-02 01:24:09 [logger.py:42] Received request cmpl-4af32c29b23d432093611e9fea20e3c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:09 [async_llm.py:261] Added request cmpl-4af32c29b23d432093611e9fea20e3c3-0.
INFO 03-02 01:24:10 [logger.py:42] Received request cmpl-48b1784aab4b4dc3a31f591aa1dad2f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:10 [async_llm.py:261] Added request cmpl-48b1784aab4b4dc3a31f591aa1dad2f7-0.
INFO 03-02 01:24:11 [logger.py:42] Received request cmpl-e84a2379ae1b4fadaf91461829f2b30c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:11 [async_llm.py:261] Added request cmpl-e84a2379ae1b4fadaf91461829f2b30c-0.
INFO 03-02 01:24:12 [logger.py:42] Received request cmpl-023e271fc4bc4e4aa13476487e178d7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:12 [async_llm.py:261] Added request cmpl-023e271fc4bc4e4aa13476487e178d7f-0.
INFO 03-02 01:24:13 [logger.py:42] Received request cmpl-3e9bad949c75436f94eb77cbdd28da85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:13 [async_llm.py:261] Added request cmpl-3e9bad949c75436f94eb77cbdd28da85-0.
INFO 03-02 01:24:14 [logger.py:42] Received request cmpl-e963eef6ff0b4929af54f4c70c4e2ff8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:14 [async_llm.py:261] Added request cmpl-e963eef6ff0b4929af54f4c70c4e2ff8-0.
INFO 03-02 01:24:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:15 [logger.py:42] Received request cmpl-6976164d725346ddac2d8ae27b9ceee8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:15 [async_llm.py:261] Added request cmpl-6976164d725346ddac2d8ae27b9ceee8-0.
INFO 03-02 01:24:16 [logger.py:42] Received request cmpl-a88038abe17149ada8a6114682778f7c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:17 [async_llm.py:261] Added request cmpl-a88038abe17149ada8a6114682778f7c-0.
INFO 03-02 01:24:18 [logger.py:42] Received request cmpl-b7ebb498ab8e428eb2b4c7d4822e5fbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:18 [async_llm.py:261] Added request cmpl-b7ebb498ab8e428eb2b4c7d4822e5fbe-0.
INFO 03-02 01:24:19 [logger.py:42] Received request cmpl-c2ea4519154d4b248f9cd3568c41c98a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:19 [async_llm.py:261] Added request cmpl-c2ea4519154d4b248f9cd3568c41c98a-0.
INFO 03-02 01:24:20 [logger.py:42] Received request cmpl-40c83fee212e462f90d0ebcf677c4ae7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:20 [async_llm.py:261] Added request cmpl-40c83fee212e462f90d0ebcf677c4ae7-0.
INFO 03-02 01:24:21 [logger.py:42] Received request cmpl-989f057382ec4e15a784bcc83b9df007-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:24:21 [async_llm.py:261] Added request cmpl-989f057382ec4e15a784bcc83b9df007-0.
[... 40 further requests with the identical prompt and sampling parameters, roughly one per second from 01:24:22 through 01:25:05; each produced the same Received/Added pair and a 200 OK response. Periodic engine stats over the same window: ...]
INFO 03-02 01:24:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:24:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:06 [logger.py:42] Received request cmpl-ba4bbdbd5e03436680c9572c5dd4b486-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:06 [async_llm.py:261] Added request cmpl-ba4bbdbd5e03436680c9572c5dd4b486-0.
INFO 03-02 01:25:07 [logger.py:42] Received request cmpl-4b11a6617344489ebf185aa7287d324d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:07 [async_llm.py:261] Added request cmpl-4b11a6617344489ebf185aa7287d324d-0.
INFO 03-02 01:25:08 [logger.py:42] Received request cmpl-20415917c41344a6821354263cbe8068-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:08 [async_llm.py:261] Added request cmpl-20415917c41344a6821354263cbe8068-0.
INFO 03-02 01:25:09 [logger.py:42] Received request cmpl-63851db58f9246599bb93e536a359fb4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:09 [async_llm.py:261] Added request cmpl-63851db58f9246599bb93e536a359fb4-0.
INFO 03-02 01:25:10 [logger.py:42] Received request cmpl-4339550e4da64cd7896170c71ad95bb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:10 [async_llm.py:261] Added request cmpl-4339550e4da64cd7896170c71ad95bb5-0.
INFO 03-02 01:25:11 [logger.py:42] Received request cmpl-ef9c349261b14bef9561ea8da4be6f85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:11 [async_llm.py:261] Added request cmpl-ef9c349261b14bef9561ea8da4be6f85-0.
INFO 03-02 01:25:12 [logger.py:42] Received request cmpl-ccb16a9ec12f40618460d22fb5f5b278-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:12 [async_llm.py:261] Added request cmpl-ccb16a9ec12f40618460d22fb5f5b278-0.
INFO 03-02 01:25:13 [logger.py:42] Received request cmpl-bdc67bafdb81489ea8357dee1010cfd2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:13 [async_llm.py:261] Added request cmpl-bdc67bafdb81489ea8357dee1010cfd2-0.
INFO 03-02 01:25:15 [logger.py:42] Received request cmpl-2377381ff5e9460fb340b69b938e90de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:15 [async_llm.py:261] Added request cmpl-2377381ff5e9460fb340b69b938e90de-0.
INFO 03-02 01:25:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:16 [logger.py:42] Received request cmpl-ccd16165139343e8b4aaa060807e61d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:16 [async_llm.py:261] Added request cmpl-ccd16165139343e8b4aaa060807e61d2-0.
INFO 03-02 01:25:17 [logger.py:42] Received request cmpl-24e409cfbd744f2d864d9c198f832ca6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:17 [async_llm.py:261] Added request cmpl-24e409cfbd744f2d864d9c198f832ca6-0.
INFO 03-02 01:25:18 [logger.py:42] Received request cmpl-c02f758433fb4ba19646a696792bc305-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:18 [async_llm.py:261] Added request cmpl-c02f758433fb4ba19646a696792bc305-0.
INFO 03-02 01:25:19 [logger.py:42] Received request cmpl-caafbb26205a4f5a8db6fc0836dda29e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:19 [async_llm.py:261] Added request cmpl-caafbb26205a4f5a8db6fc0836dda29e-0.
INFO 03-02 01:25:20 [logger.py:42] Received request cmpl-0551baa03d7c46c3bfa9453bc812350d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:20 [async_llm.py:261] Added request cmpl-0551baa03d7c46c3bfa9453bc812350d-0.
INFO 03-02 01:25:21 [logger.py:42] Received request cmpl-7d9e758e3cb5423da3e2016ef8e0b4b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:21 [async_llm.py:261] Added request cmpl-7d9e758e3cb5423da3e2016ef8e0b4b5-0.
INFO 03-02 01:25:22 [logger.py:42] Received request cmpl-a768f330c5f04357b86a725104ee87b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:22 [async_llm.py:261] Added request cmpl-a768f330c5f04357b86a725104ee87b7-0.
INFO 03-02 01:25:23 [logger.py:42] Received request cmpl-f7edf5fd9072431ab5aaaa60ff4c2a39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:23 [async_llm.py:261] Added request cmpl-f7edf5fd9072431ab5aaaa60ff4c2a39-0.
INFO 03-02 01:25:24 [logger.py:42] Received request cmpl-de9d4e0002e44022b7307498ec43de77-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:24 [async_llm.py:261] Added request cmpl-de9d4e0002e44022b7307498ec43de77-0.
INFO 03-02 01:25:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:26 [logger.py:42] Received request cmpl-73ef0e8b1da74e29acd51a33831aed63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:26 [async_llm.py:261] Added request cmpl-73ef0e8b1da74e29acd51a33831aed63-0.
INFO 03-02 01:25:27 [logger.py:42] Received request cmpl-4df9a6641adc49fcae65ae1cb216ad85-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:27 [async_llm.py:261] Added request cmpl-4df9a6641adc49fcae65ae1cb216ad85-0.
INFO 03-02 01:25:28 [logger.py:42] Received request cmpl-2a976f2924e74eac9ad41154bf94a1e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:28 [async_llm.py:261] Added request cmpl-2a976f2924e74eac9ad41154bf94a1e8-0.
INFO 03-02 01:25:29 [logger.py:42] Received request cmpl-1d2443bbf846428d91394654f870f12b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:29 [async_llm.py:261] Added request cmpl-1d2443bbf846428d91394654f870f12b-0.
INFO 03-02 01:25:30 [logger.py:42] Received request cmpl-12f3cc8b02a540fa8a8fbf9ec955c815-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:30 [async_llm.py:261] Added request cmpl-12f3cc8b02a540fa8a8fbf9ec955c815-0.
INFO 03-02 01:25:31 [logger.py:42] Received request cmpl-835c58b62a8c47b88deb2495fef8c2be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:31 [async_llm.py:261] Added request cmpl-835c58b62a8c47b88deb2495fef8c2be-0.
INFO 03-02 01:25:32 [logger.py:42] Received request cmpl-f0bf19095f5f4274a76f9790f28895d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:32 [async_llm.py:261] Added request cmpl-f0bf19095f5f4274a76f9790f28895d3-0.
INFO 03-02 01:25:33 [logger.py:42] Received request cmpl-88b3c8a0c04f4e2798b17348833dae3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:33 [async_llm.py:261] Added request cmpl-88b3c8a0c04f4e2798b17348833dae3e-0.
INFO 03-02 01:25:34 [logger.py:42] Received request cmpl-9f117b0f8f1d446da5d16a07c6447ca2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:34 [async_llm.py:261] Added request cmpl-9f117b0f8f1d446da5d16a07c6447ca2-0.
INFO 03-02 01:25:35 [logger.py:42] Received request cmpl-1b6909e314c84689b3b1fb2b26f0d341-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:35 [async_llm.py:261] Added request cmpl-1b6909e314c84689b3b1fb2b26f0d341-0.
INFO 03-02 01:25:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:36 [logger.py:42] Received request cmpl-e38863a9b24c45d09690e4af3b54c0bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:36 [async_llm.py:261] Added request cmpl-e38863a9b24c45d09690e4af3b54c0bf-0.
INFO 03-02 01:25:38 [logger.py:42] Received request cmpl-38c1da68c3534b4db35bfe3c21eb8827-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:38 [async_llm.py:261] Added request cmpl-38c1da68c3534b4db35bfe3c21eb8827-0.
INFO 03-02 01:25:39 [logger.py:42] Received request cmpl-ef9cbe3108a84ea2aa663560a8e48b0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:39 [async_llm.py:261] Added request cmpl-ef9cbe3108a84ea2aa663560a8e48b0d-0.
INFO 03-02 01:25:40 [logger.py:42] Received request cmpl-46a15611dcbe4c8089819b9852d48818-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:40 [async_llm.py:261] Added request cmpl-46a15611dcbe4c8089819b9852d48818-0.
INFO 03-02 01:25:41 [logger.py:42] Received request cmpl-3c5ea28f60f64fbfa3d91b4044ad0f94-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:41 [async_llm.py:261] Added request cmpl-3c5ea28f60f64fbfa3d91b4044ad0f94-0.
INFO 03-02 01:25:42 [logger.py:42] Received request cmpl-bb3d972cb1d94de4b9a5262d7d204ef4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:42 [async_llm.py:261] Added request cmpl-bb3d972cb1d94de4b9a5262d7d204ef4-0.
INFO 03-02 01:25:43 [logger.py:42] Received request cmpl-48990fd3dc754d28a5292a1634c3461d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:43 [async_llm.py:261] Added request cmpl-48990fd3dc754d28a5292a1634c3461d-0.
INFO 03-02 01:25:44 [logger.py:42] Received request cmpl-1c6e8b7f5c1d41f4a6a5909bb9f71b0d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:44 [async_llm.py:261] Added request cmpl-1c6e8b7f5c1d41f4a6a5909bb9f71b0d-0.
INFO 03-02 01:25:45 [logger.py:42] Received request cmpl-a4ab9e0498a442039a45a46659168ddd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:45 [async_llm.py:261] Added request cmpl-a4ab9e0498a442039a45a46659168ddd-0.
INFO 03-02 01:25:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
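The periodic `loggers.py:116` stats line is consistent with the request stream around it: each request carries 7 prompt tokens (`prompt_token_ids` has 7 entries) and at most 5 generated tokens (`max_tokens=5`). A minimal sanity check, assuming a roughly constant arrival interval implied by the reported prompt throughput:

```python
# Sanity-check the engine stats against the request stream in the log.
# Assumption: requests arrive at a roughly constant interval; the interval
# is inferred from "Avg prompt throughput: 6.3 tokens/s" rather than measured.
prompt_tokens = 7          # len(prompt_token_ids) in each "Received request" entry
gen_tokens = 5             # max_tokens=5 per request (an upper bound on generation)
interval_s = prompt_tokens / 6.3   # implied inter-arrival time, ~1.1 s

prompt_tps = prompt_tokens / interval_s   # ~6.3 tokens/s
gen_tps = gen_tokens / interval_s         # ~4.5 tokens/s
```

The computed ~4.5 tokens/s matches the "Avg generation throughput" reported in most of the stats lines, which suggests each request is indeed generating close to its 5-token cap.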
INFO 03-02 01:25:46 [logger.py:42] Received request cmpl-9ad774efb44345a88767d4c20d14aa3b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:46 [async_llm.py:261] Added request cmpl-9ad774efb44345a88767d4c20d14aa3b-0.
INFO 03-02 01:25:47 [logger.py:42] Received request cmpl-9ceb630fef684074968bba0af8401723-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:47 [async_llm.py:261] Added request cmpl-9ceb630fef684074968bba0af8401723-0.
INFO 03-02 01:25:49 [logger.py:42] Received request cmpl-01972e7cd0c24d5f8aae82499e2350b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:49 [async_llm.py:261] Added request cmpl-01972e7cd0c24d5f8aae82499e2350b7-0.
INFO 03-02 01:25:50 [logger.py:42] Received request cmpl-860dda8530a54259b723ac23415c7bc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:50 [async_llm.py:261] Added request cmpl-860dda8530a54259b723ac23415c7bc1-0.
INFO 03-02 01:25:51 [logger.py:42] Received request cmpl-fddd506e43234c76b67c1242225ec114-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:51 [async_llm.py:261] Added request cmpl-fddd506e43234c76b67c1242225ec114-0.
INFO 03-02 01:25:52 [logger.py:42] Received request cmpl-aeab8b95921c4e6e9b0762650728d31d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:52 [async_llm.py:261] Added request cmpl-aeab8b95921c4e6e9b0762650728d31d-0.
INFO 03-02 01:25:53 [logger.py:42] Received request cmpl-a7b8919308fa4585911f33add81e132f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:53 [async_llm.py:261] Added request cmpl-a7b8919308fa4585911f33add81e132f-0.
INFO 03-02 01:25:54 [logger.py:42] Received request cmpl-9b2979329c0648f2bae077b7222f6970-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:54 [async_llm.py:261] Added request cmpl-9b2979329c0648f2bae077b7222f6970-0.
INFO 03-02 01:25:55 [logger.py:42] Received request cmpl-0c0950f7302e4e31b8d1ba66ed0b9cc7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:55 [async_llm.py:261] Added request cmpl-0c0950f7302e4e31b8d1ba66ed0b9cc7-0.
INFO 03-02 01:25:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:25:56 [logger.py:42] Received request cmpl-267d15d0b9f8478aadd8eea9e012862b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:56 [async_llm.py:261] Added request cmpl-267d15d0b9f8478aadd8eea9e012862b-0.
INFO 03-02 01:25:57 [logger.py:42] Received request cmpl-fa5d062cc2d644c0a037e493ddcddf5d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:57 [async_llm.py:261] Added request cmpl-fa5d062cc2d644c0a037e493ddcddf5d-0.
INFO 03-02 01:25:58 [logger.py:42] Received request cmpl-7871c1fbb7354817b9c8d5572223e562-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:25:58 [async_llm.py:261] Added request cmpl-7871c1fbb7354817b9c8d5572223e562-0.
INFO 03-02 01:25:59 [logger.py:42] Received request cmpl-ad8d9fa0538c4f93837a5ac44b393451-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:00 [async_llm.py:261] Added request cmpl-ad8d9fa0538c4f93837a5ac44b393451-0.
INFO 03-02 01:26:01 [logger.py:42] Received request cmpl-cfee40342aee4173bad243ba3b969216-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:01 [async_llm.py:261] Added request cmpl-cfee40342aee4173bad243ba3b969216-0.
INFO 03-02 01:26:02 [logger.py:42] Received request cmpl-f0d33f82e08140769e94c4632203f3ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:02 [async_llm.py:261] Added request cmpl-f0d33f82e08140769e94c4632203f3ee-0.
INFO 03-02 01:26:03 [logger.py:42] Received request cmpl-86d2e1a94f154ecd8adf9d3323b2a877-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:03 [async_llm.py:261] Added request cmpl-86d2e1a94f154ecd8adf9d3323b2a877-0.
INFO 03-02 01:26:04 [logger.py:42] Received request cmpl-23a4568249b9441bbbe78f8f08e8b636-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:04 [async_llm.py:261] Added request cmpl-23a4568249b9441bbbe78f8f08e8b636-0.
INFO 03-02 01:26:05 [logger.py:42] Received request cmpl-f0aac5b6aa3a48a7ba1c2b858d3cd0eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:05 [async_llm.py:261] Added request cmpl-f0aac5b6aa3a48a7ba1c2b858d3cd0eb-0.
INFO 03-02 01:26:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:06 [logger.py:42] Received request cmpl-12836807c34d4b1682e00582059c4d90-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:06 [async_llm.py:261] Added request cmpl-12836807c34d4b1682e00582059c4d90-0.
INFO 03-02 01:26:07 [logger.py:42] Received request cmpl-9a6ab139c5294328825863586452f8b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:07 [async_llm.py:261] Added request cmpl-9a6ab139c5294328825863586452f8b9-0.
INFO 03-02 01:26:08 [logger.py:42] Received request cmpl-eb7273858daa4f82ac5350e0ecbdbf1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:08 [async_llm.py:261] Added request cmpl-eb7273858daa4f82ac5350e0ecbdbf1d-0.
INFO 03-02 01:26:09 [logger.py:42] Received request cmpl-b02ecf64b3ff4adb8a689ed60872be1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:09 [async_llm.py:261] Added request cmpl-b02ecf64b3ff4adb8a689ed60872be1b-0.
INFO 03-02 01:26:10 [logger.py:42] Received request cmpl-45047c90d7af402e9879504b25e668af-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:10 [async_llm.py:261] Added request cmpl-45047c90d7af402e9879504b25e668af-0.
INFO 03-02 01:26:12 [logger.py:42] Received request cmpl-b8abfe9ee7bc472f81f5cb1eb644186a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:12 [async_llm.py:261] Added request cmpl-b8abfe9ee7bc472f81f5cb1eb644186a-0.
INFO 03-02 01:26:13 [logger.py:42] Received request cmpl-337ea5b2dcb5432e8898a31d46f844d1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:13 [async_llm.py:261] Added request cmpl-337ea5b2dcb5432e8898a31d46f844d1-0.
INFO 03-02 01:26:14 [logger.py:42] Received request cmpl-c184e3069a89471a8d2be3407f321035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:14 [async_llm.py:261] Added request cmpl-c184e3069a89471a8d2be3407f321035-0.
INFO 03-02 01:26:15 [logger.py:42] Received request cmpl-4aea985b88a04a22a1eea3d9ef63ac24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:15 [async_llm.py:261] Added request cmpl-4aea985b88a04a22a1eea3d9ef63ac24-0.
INFO 03-02 01:26:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:26:16 [logger.py:42] Received request cmpl-71ab297cedcc4aa5ab3ffbfac4cb7fc1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:16 [async_llm.py:261] Added request cmpl-71ab297cedcc4aa5ab3ffbfac4cb7fc1-0.
INFO 03-02 01:26:17 [logger.py:42] Received request cmpl-571670b91eb64a858174bd32e8ad53ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:17 [async_llm.py:261] Added request cmpl-571670b91eb64a858174bd32e8ad53ee-0.
INFO 03-02 01:26:18 [logger.py:42] Received request cmpl-7fe7f77deab54156940b8e72dba4c90d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:18 [async_llm.py:261] Added request cmpl-7fe7f77deab54156940b8e72dba4c90d-0.
INFO 03-02 01:26:19 [logger.py:42] Received request cmpl-3b404158e5db47eb86db011ba19d711b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:26:19 [async_llm.py:261] Added request cmpl-3b404158e5db47eb86db011ba19d711b-0.
[... 40 further request/response cycles (01:26:20–01:27:03) elided; each repeats the same prompt 'write a quick sort algorithm.' with identical SamplingParams (max_tokens=5, temperature=0.0), differing only in request ID and timestamp ...]
INFO 03-02 01:26:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... the Engine 000 summary above recurs roughly every 10 s throughout this window with identical values ...]
INFO 03-02 01:27:04 [logger.py:42] Received request cmpl-87b41752d19c4a64818471db152b3921-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:04 [async_llm.py:261] Added request cmpl-87b41752d19c4a64818471db152b3921-0.
INFO 03-02 01:27:05 [logger.py:42] Received request cmpl-089991be027f4d4e85a911f2d99e3838-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:05 [async_llm.py:261] Added request cmpl-089991be027f4d4e85a911f2d99e3838-0.
INFO 03-02 01:27:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:27:06 [logger.py:42] Received request cmpl-699bc8656c534816ad72e9e64fed5ecd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:06 [async_llm.py:261] Added request cmpl-699bc8656c534816ad72e9e64fed5ecd-0.
INFO 03-02 01:27:07 [logger.py:42] Received request cmpl-35d653f1a9124ccbb8bc99aa64bab63f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:07 [async_llm.py:261] Added request cmpl-35d653f1a9124ccbb8bc99aa64bab63f-0.
INFO 03-02 01:27:09 [logger.py:42] Received request cmpl-b93e743b98db46a49f4a3e71a8e523c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:09 [async_llm.py:261] Added request cmpl-b93e743b98db46a49f4a3e71a8e523c6-0.
INFO 03-02 01:27:10 [logger.py:42] Received request cmpl-5c2cac7d7d5e4461a47a4cab39727e72-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:10 [async_llm.py:261] Added request cmpl-5c2cac7d7d5e4461a47a4cab39727e72-0.
INFO 03-02 01:27:11 [logger.py:42] Received request cmpl-fd1dae68ce024c629de21246ff75c35f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:11 [async_llm.py:261] Added request cmpl-fd1dae68ce024c629de21246ff75c35f-0.
INFO 03-02 01:27:12 [logger.py:42] Received request cmpl-13c04d844a4b473da79fda1db9de8c67-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:12 [async_llm.py:261] Added request cmpl-13c04d844a4b473da79fda1db9de8c67-0.
INFO 03-02 01:27:13 [logger.py:42] Received request cmpl-071379af1c4946a2874b55f9493c5009-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:13 [async_llm.py:261] Added request cmpl-071379af1c4946a2874b55f9493c5009-0.
INFO 03-02 01:27:14 [logger.py:42] Received request cmpl-de9da25d032a4c69827a2c4ad7a4ac6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:14 [async_llm.py:261] Added request cmpl-de9da25d032a4c69827a2c4ad7a4ac6d-0.
INFO 03-02 01:27:15 [logger.py:42] Received request cmpl-b8aa9edcc17e4458a72c702c0a4f67df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:15 [async_llm.py:261] Added request cmpl-b8aa9edcc17e4458a72c702c0a4f67df-0.
INFO 03-02 01:27:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:27:16 [logger.py:42] Received request cmpl-1193b0ccf2464ce4afef7efdc01ab5f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:16 [async_llm.py:261] Added request cmpl-1193b0ccf2464ce4afef7efdc01ab5f8-0.
INFO 03-02 01:27:17 [logger.py:42] Received request cmpl-0ad290db12d14a098e50010808be4ce6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:17 [async_llm.py:261] Added request cmpl-0ad290db12d14a098e50010808be4ce6-0.
INFO 03-02 01:27:18 [logger.py:42] Received request cmpl-6a125b9af03e4fa79fed09407625f0ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:18 [async_llm.py:261] Added request cmpl-6a125b9af03e4fa79fed09407625f0ab-0.
INFO 03-02 01:27:20 [logger.py:42] Received request cmpl-4b556148524f453e889b2deb67981e46-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:20 [async_llm.py:261] Added request cmpl-4b556148524f453e889b2deb67981e46-0.
INFO 03-02 01:27:21 [logger.py:42] Received request cmpl-ee8e7ae58179401c9d4b042d967ef365-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:21 [async_llm.py:261] Added request cmpl-ee8e7ae58179401c9d4b042d967ef365-0.
INFO 03-02 01:27:22 [logger.py:42] Received request cmpl-e79ffb00c23b480aa00dac1c134dba4b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:22 [async_llm.py:261] Added request cmpl-e79ffb00c23b480aa00dac1c134dba4b-0.
INFO 03-02 01:27:23 [logger.py:42] Received request cmpl-b82e6887b374444386f9658330b7cca9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:23 [async_llm.py:261] Added request cmpl-b82e6887b374444386f9658330b7cca9-0.
INFO 03-02 01:27:24 [logger.py:42] Received request cmpl-fb60949a53da4c51b9965588d17fad58-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:24 [async_llm.py:261] Added request cmpl-fb60949a53da4c51b9965588d17fad58-0.
INFO 03-02 01:27:25 [logger.py:42] Received request cmpl-d7d96a6e97c34e1d8e02897826501b41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:25 [async_llm.py:261] Added request cmpl-d7d96a6e97c34e1d8e02897826501b41-0.
INFO 03-02 01:27:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:27:26 [logger.py:42] Received request cmpl-d0ab3fea059b4e1e8d9ebcc11b83d9e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:26 [async_llm.py:261] Added request cmpl-d0ab3fea059b4e1e8d9ebcc11b83d9e4-0.
INFO 03-02 01:27:27 [logger.py:42] Received request cmpl-42f939ad6af540ada94e0f00c512df81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:27 [async_llm.py:261] Added request cmpl-42f939ad6af540ada94e0f00c512df81-0.
INFO 03-02 01:27:28 [logger.py:42] Received request cmpl-e74f5fa1dce948998bad98fd6dcac508-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:28 [async_llm.py:261] Added request cmpl-e74f5fa1dce948998bad98fd6dcac508-0.
INFO 03-02 01:27:29 [logger.py:42] Received request cmpl-1151614918d94f6b9daeb2258488684c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:29 [async_llm.py:261] Added request cmpl-1151614918d94f6b9daeb2258488684c-0.
INFO 03-02 01:27:30 [logger.py:42] Received request cmpl-63da3e8188794e25b879f8e2aa3fda13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:30 [async_llm.py:261] Added request cmpl-63da3e8188794e25b879f8e2aa3fda13-0.
INFO 03-02 01:27:32 [logger.py:42] Received request cmpl-7f5b339d6432497da27f6c156dc6a6cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:32 [async_llm.py:261] Added request cmpl-7f5b339d6432497da27f6c156dc6a6cb-0.
INFO 03-02 01:27:33 [logger.py:42] Received request cmpl-c837ea3c98aa45b385029710142f4758-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:33 [async_llm.py:261] Added request cmpl-c837ea3c98aa45b385029710142f4758-0.
INFO 03-02 01:27:34 [logger.py:42] Received request cmpl-20b85cb3ecd64869a011181eee3aef8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:34 [async_llm.py:261] Added request cmpl-20b85cb3ecd64869a011181eee3aef8b-0.
INFO 03-02 01:27:35 [logger.py:42] Received request cmpl-92d5bd007c1b4cf9b0f6492820320ac9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:35 [async_llm.py:261] Added request cmpl-92d5bd007c1b4cf9b0f6492820320ac9-0.
INFO 03-02 01:27:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:27:36 [logger.py:42] Received request cmpl-1393d44e9bb04075adcdd466296e1152-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:36 [async_llm.py:261] Added request cmpl-1393d44e9bb04075adcdd466296e1152-0.
INFO 03-02 01:27:37 [logger.py:42] Received request cmpl-a704ec250ebf469a93fa592b2062c78a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:37 [async_llm.py:261] Added request cmpl-a704ec250ebf469a93fa592b2062c78a-0.
INFO 03-02 01:27:38 [logger.py:42] Received request cmpl-059c4b3f488944d784df69387d85c96d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:38 [async_llm.py:261] Added request cmpl-059c4b3f488944d784df69387d85c96d-0.
INFO 03-02 01:27:39 [logger.py:42] Received request cmpl-4928f932691646c699fc948331c12917-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:39 [async_llm.py:261] Added request cmpl-4928f932691646c699fc948331c12917-0.
INFO 03-02 01:27:40 [logger.py:42] Received request cmpl-9d2a57151c6c49388989b6991b74b229-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:40 [async_llm.py:261] Added request cmpl-9d2a57151c6c49388989b6991b74b229-0.
INFO 03-02 01:27:41 [logger.py:42] Received request cmpl-ba04ad843cbc4d6198bbabc6395560fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:41 [async_llm.py:261] Added request cmpl-ba04ad843cbc4d6198bbabc6395560fe-0.
INFO 03-02 01:27:43 [logger.py:42] Received request cmpl-a371c0be641a42a9ba9a8dfbd6e04aa7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:43 [async_llm.py:261] Added request cmpl-a371c0be641a42a9ba9a8dfbd6e04aa7-0.
INFO 03-02 01:27:44 [logger.py:42] Received request cmpl-b8764a05b9a44179908ae32be0bb7deb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:44 [async_llm.py:261] Added request cmpl-b8764a05b9a44179908ae32be0bb7deb-0.
INFO 03-02 01:27:45 [logger.py:42] Received request cmpl-21162ba6fba442399cce839685870cef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:45 [async_llm.py:261] Added request cmpl-21162ba6fba442399cce839685870cef-0.
INFO 03-02 01:27:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
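Each request cycle above (Received → 200 OK → Added) corresponds to one call against the pod's OpenAI-compatible `/v1/completions` endpoint. A minimal sketch of the client payload implied by the logged SamplingParams — the base URL is hypothetical (the pod's actual address is not shown in the log), and only the non-default fields need to be sent:

```python
import json

# Hypothetical endpoint; the real pod URL does not appear in the log.
BASE_URL = "http://localhost:8000"

# Payload mirroring the SamplingParams recorded above:
# n=1, temperature=0.0, top_p=1.0, max_tokens=5.
payload = {
    "model": "CR-70B",  # model name from the Funcpod header
    "prompt": "write a quick sort algorithm.",
    "max_tokens": 5,
    "temperature": 0.0,
    "top_p": 1.0,
    "n": 1,
}

body = json.dumps(payload)
# POST {BASE_URL}/v1/completions with Content-Type: application/json
```

With temperature 0.0 and top_p 1.0 the sampling is effectively greedy, which is consistent with the identical prompt_token_ids on every entry — this traffic pattern looks like a fixed benchmark probe rather than organic load.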
INFO 03-02 01:27:46 [logger.py:42] Received request cmpl-3f7efa33fe6c4b15b80c430127ae427e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:46 [async_llm.py:261] Added request cmpl-3f7efa33fe6c4b15b80c430127ae427e-0.
INFO 03-02 01:27:47 [logger.py:42] Received request cmpl-f48dc23ecc924f969db8974bab41b341-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:47 [async_llm.py:261] Added request cmpl-f48dc23ecc924f969db8974bab41b341-0.
INFO 03-02 01:27:48 [logger.py:42] Received request cmpl-3b596a1effa44d06a737d9b895634c4f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:48 [async_llm.py:261] Added request cmpl-3b596a1effa44d06a737d9b895634c4f-0.
INFO 03-02 01:27:49 [logger.py:42] Received request cmpl-421573a45e2b42fab659add3d89a021f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:49 [async_llm.py:261] Added request cmpl-421573a45e2b42fab659add3d89a021f-0.
INFO 03-02 01:27:50 [logger.py:42] Received request cmpl-ccff9409e4a34d789812a9792d658dc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:50 [async_llm.py:261] Added request cmpl-ccff9409e4a34d789812a9792d658dc5-0.
INFO 03-02 01:27:51 [logger.py:42] Received request cmpl-58ab08c7a03c4d89ac7e1d6aa968c053-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:51 [async_llm.py:261] Added request cmpl-58ab08c7a03c4d89ac7e1d6aa968c053-0.
INFO 03-02 01:27:52 [logger.py:42] Received request cmpl-2e563da232894259a3272b8e265d353c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:52 [async_llm.py:261] Added request cmpl-2e563da232894259a3272b8e265d353c-0.
INFO 03-02 01:27:53 [logger.py:42] Received request cmpl-b298702c6167487cb1d7edf7d5ad1800-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:53 [async_llm.py:261] Added request cmpl-b298702c6167487cb1d7edf7d5ad1800-0.
INFO 03-02 01:27:55 [logger.py:42] Received request cmpl-d2c6c1ee58fe4131a6314ac7d3f8220a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:55 [async_llm.py:261] Added request cmpl-d2c6c1ee58fe4131a6314ac7d3f8220a-0.
INFO 03-02 01:27:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:27:56 [logger.py:42] Received request cmpl-5fac629fc9114a39aa9da080a30743a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:56 [async_llm.py:261] Added request cmpl-5fac629fc9114a39aa9da080a30743a5-0.
INFO 03-02 01:27:57 [logger.py:42] Received request cmpl-eea9cc637a5e4afc9cca81bc214e52ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:57 [async_llm.py:261] Added request cmpl-eea9cc637a5e4afc9cca81bc214e52ec-0.
INFO 03-02 01:27:58 [logger.py:42] Received request cmpl-8ada8627ab664a4e98845635c9860ff9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:58 [async_llm.py:261] Added request cmpl-8ada8627ab664a4e98845635c9860ff9-0.
INFO 03-02 01:27:59 [logger.py:42] Received request cmpl-f7bb99ee3fce4cc9be30f5370e02cd40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:27:59 [async_llm.py:261] Added request cmpl-f7bb99ee3fce4cc9be30f5370e02cd40-0.
INFO 03-02 01:28:00 [logger.py:42] Received request cmpl-70926b1fad8e42398cd81e309588112d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:00 [async_llm.py:261] Added request cmpl-70926b1fad8e42398cd81e309588112d-0.
INFO 03-02 01:28:01 [logger.py:42] Received request cmpl-05bb28fc7df644109a323723038205d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:01 [async_llm.py:261] Added request cmpl-05bb28fc7df644109a323723038205d8-0.
INFO 03-02 01:28:02 [logger.py:42] Received request cmpl-47583d86666943eda91674184a5e4044-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:02 [async_llm.py:261] Added request cmpl-47583d86666943eda91674184a5e4044-0.
INFO 03-02 01:28:03 [logger.py:42] Received request cmpl-f57c1e0705b54eb3a83b02f452060e5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:03 [async_llm.py:261] Added request cmpl-f57c1e0705b54eb3a83b02f452060e5a-0.
INFO 03-02 01:28:04 [logger.py:42] Received request cmpl-e14fbcf92ec843e78af42d79d44f22df-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:04 [async_llm.py:261] Added request cmpl-e14fbcf92ec843e78af42d79d44f22df-0.
INFO 03-02 01:28:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:06 [logger.py:42] Received request cmpl-9380a7bf7d914929acc2c0b044bb1393-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:06 [async_llm.py:261] Added request cmpl-9380a7bf7d914929acc2c0b044bb1393-0.
INFO 03-02 01:28:07 [logger.py:42] Received request cmpl-19b0cdaf6518407bb22797d2d5d55b52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:07 [async_llm.py:261] Added request cmpl-19b0cdaf6518407bb22797d2d5d55b52-0.
INFO 03-02 01:28:08 [logger.py:42] Received request cmpl-59a6f7d046d547cb8663aaa2b16b86ce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:08 [async_llm.py:261] Added request cmpl-59a6f7d046d547cb8663aaa2b16b86ce-0.
INFO 03-02 01:28:09 [logger.py:42] Received request cmpl-4d10439751af494aa5e351bcbb29b897-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:09 [async_llm.py:261] Added request cmpl-4d10439751af494aa5e351bcbb29b897-0.
INFO 03-02 01:28:10 [logger.py:42] Received request cmpl-82797bf678d34f059078b6a81b46f6ca-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:10 [async_llm.py:261] Added request cmpl-82797bf678d34f059078b6a81b46f6ca-0.
INFO 03-02 01:28:11 [logger.py:42] Received request cmpl-2948f8136a2d4748885f5e0a7c3f382a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:11 [async_llm.py:261] Added request cmpl-2948f8136a2d4748885f5e0a7c3f382a-0.
INFO 03-02 01:28:12 [logger.py:42] Received request cmpl-400cafa713ab476e92059627742ca16f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:12 [async_llm.py:261] Added request cmpl-400cafa713ab476e92059627742ca16f-0.
INFO 03-02 01:28:13 [logger.py:42] Received request cmpl-568d405d374843d48d1e079dad356da5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:13 [async_llm.py:261] Added request cmpl-568d405d374843d48d1e079dad356da5-0.
INFO 03-02 01:28:14 [logger.py:42] Received request cmpl-85a854c124d5420199ffa647af5503a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:14 [async_llm.py:261] Added request cmpl-85a854c124d5420199ffa647af5503a7-0.
INFO 03-02 01:28:15 [logger.py:42] Received request cmpl-7a2b109e7f0b4f719c2abfc92eddc387-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:15 [async_llm.py:261] Added request cmpl-7a2b109e7f0b4f719c2abfc92eddc387-0.
INFO 03-02 01:28:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
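The periodic `loggers.py` Engine lines, like the one above, report throughput and cache metrics in a fixed textual layout. A minimal sketch of scraping them into structured values, assuming that field order stays stable across vLLM versions:

```python
import re

# Field layout taken from the Engine stats lines in this log.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, "
    r"Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_usage>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit>[\d.]+)%"
)

def parse_engine_stats(line):
    """Extract throughput and cache metrics from one Engine stats line.

    Returns a dict of typed values, or None if the line is not a stats line.
    """
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_usage_pct": float(d["kv_usage"]),
        "prefix_hit_pct": float(d["prefix_hit"]),
    }

sample = ("INFO 03-02 01:28:15 [loggers.py:116] Engine 000: "
          "Avg prompt throughput: 6.3 tokens/s, "
          "Avg generation throughput: 4.5 tokens/s, "
          "Running: 0 reqs, Waiting: 0 reqs, "
          "GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%")
stats = parse_engine_stats(sample)
# stats["prompt_tps"] → 6.3, stats["gen_tps"] → 4.5
```

Running and Waiting both sitting at 0 in every sample is expected here: each probe request finishes within the 10-second reporting window, so the instantaneous queue depth at sampling time is empty even though throughput is nonzero.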
INFO 03-02 01:28:17 [logger.py:42] Received request cmpl-6151aefa82114395852fe9efbd6ef948-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:17 [async_llm.py:261] Added request cmpl-6151aefa82114395852fe9efbd6ef948-0.
INFO 03-02 01:28:18 [logger.py:42] Received request cmpl-6dc2031c78804bc0a7f08cb1ccee20bf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:18 [async_llm.py:261] Added request cmpl-6dc2031c78804bc0a7f08cb1ccee20bf-0.
INFO 03-02 01:28:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:26 [logger.py:42] Received request cmpl-5bce8bb473724ef2b7b98c88af229ed7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:26 [async_llm.py:261] Added request cmpl-5bce8bb473724ef2b7b98c88af229ed7-0.
INFO 03-02 01:28:27 [logger.py:42] Received request cmpl-c8806e55378b4f8f84f16fd9284beef8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:27 [async_llm.py:261] Added request cmpl-c8806e55378b4f8f84f16fd9284beef8-0.
INFO 03-02 01:28:29 [logger.py:42] Received request cmpl-8e1040f6c9ad46f9af3a5d05db3a5abf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:29 [async_llm.py:261] Added request cmpl-8e1040f6c9ad46f9af3a5d05db3a5abf-0.
INFO 03-02 01:28:30 [logger.py:42] Received request cmpl-0f51a943cdd6418da3628b6394446a08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:30 [async_llm.py:261] Added request cmpl-0f51a943cdd6418da3628b6394446a08-0.
INFO 03-02 01:28:31 [logger.py:42] Received request cmpl-eefe9cf5c6f449c8a61e8346ef4faa1b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:31 [async_llm.py:261] Added request cmpl-eefe9cf5c6f449c8a61e8346ef4faa1b-0.
INFO 03-02 01:28:32 [logger.py:42] Received request cmpl-8fb53ecdc03b48098e88b3cbb77ad1c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:32 [async_llm.py:261] Added request cmpl-8fb53ecdc03b48098e88b3cbb77ad1c3-0.
INFO 03-02 01:28:33 [logger.py:42] Received request cmpl-7b5f7a83c0c342f78fc26f0fb8e5c638-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:33 [async_llm.py:261] Added request cmpl-7b5f7a83c0c342f78fc26f0fb8e5c638-0.
INFO 03-02 01:28:34 [logger.py:42] Received request cmpl-46be421ff9f14cd0963c9bdceebb6e34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:34 [async_llm.py:261] Added request cmpl-46be421ff9f14cd0963c9bdceebb6e34-0.
INFO 03-02 01:28:35 [logger.py:42] Received request cmpl-2236f9edf2434423b17532a908f4080e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:35 [async_llm.py:261] Added request cmpl-2236f9edf2434423b17532a908f4080e-0.
INFO 03-02 01:28:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:36 [logger.py:42] Received request cmpl-f98f7ac49d62427d82e1b3f9167abc5c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:36 [async_llm.py:261] Added request cmpl-f98f7ac49d62427d82e1b3f9167abc5c-0.
INFO 03-02 01:28:37 [logger.py:42] Received request cmpl-1d0b2db5241946eab39900af89c6e186-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:37 [async_llm.py:261] Added request cmpl-1d0b2db5241946eab39900af89c6e186-0.
INFO 03-02 01:28:38 [logger.py:42] Received request cmpl-1b1e1e44a2d7485aaf1f9968cbe4ec10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:38 [async_llm.py:261] Added request cmpl-1b1e1e44a2d7485aaf1f9968cbe4ec10-0.
INFO 03-02 01:28:40 [logger.py:42] Received request cmpl-f9d1c799a9d24d0eb2e31d92d0fba8cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:40 [async_llm.py:261] Added request cmpl-f9d1c799a9d24d0eb2e31d92d0fba8cc-0.
INFO 03-02 01:28:41 [logger.py:42] Received request cmpl-017b617281f14bf490960fef7f2d8a64-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:41 [async_llm.py:261] Added request cmpl-017b617281f14bf490960fef7f2d8a64-0.
INFO 03-02 01:28:42 [logger.py:42] Received request cmpl-537211e2a482401789b0211871876f17-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:42 [async_llm.py:261] Added request cmpl-537211e2a482401789b0211871876f17-0.
INFO 03-02 01:28:43 [logger.py:42] Received request cmpl-33cbb314a03845df96963157d5e14c51-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:43 [async_llm.py:261] Added request cmpl-33cbb314a03845df96963157d5e14c51-0.
INFO 03-02 01:28:44 [logger.py:42] Received request cmpl-963a5adbf0ba4cf3ac9fa86cc644dd63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:44 [async_llm.py:261] Added request cmpl-963a5adbf0ba4cf3ac9fa86cc644dd63-0.
INFO 03-02 01:28:45 [logger.py:42] Received request cmpl-673bdd35796f406ebabbfcc4cdb2c14e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:45 [async_llm.py:261] Added request cmpl-673bdd35796f406ebabbfcc4cdb2c14e-0.
INFO 03-02 01:28:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:46 [logger.py:42] Received request cmpl-ad6cdbd735334f2bb8707b71e4608c68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:46 [async_llm.py:261] Added request cmpl-ad6cdbd735334f2bb8707b71e4608c68-0.
INFO 03-02 01:28:47 [logger.py:42] Received request cmpl-47396eac39154acc9ec9c117e0e1aa8a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:47 [async_llm.py:261] Added request cmpl-47396eac39154acc9ec9c117e0e1aa8a-0.
INFO 03-02 01:28:48 [logger.py:42] Received request cmpl-79a31b9f582649ebaa77e617ab56374b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:48 [async_llm.py:261] Added request cmpl-79a31b9f582649ebaa77e617ab56374b-0.
INFO 03-02 01:28:49 [logger.py:42] Received request cmpl-5478fa92ae37456d899f53b2ac0404c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:49 [async_llm.py:261] Added request cmpl-5478fa92ae37456d899f53b2ac0404c7-0.
INFO 03-02 01:28:51 [logger.py:42] Received request cmpl-4b6f88884c88427fbd451891ed4de5ed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:51 [async_llm.py:261] Added request cmpl-4b6f88884c88427fbd451891ed4de5ed-0.
INFO 03-02 01:28:52 [logger.py:42] Received request cmpl-f3596567f59b4124afe55fe4da911a71-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:52 [async_llm.py:261] Added request cmpl-f3596567f59b4124afe55fe4da911a71-0.
INFO 03-02 01:28:53 [logger.py:42] Received request cmpl-8b31e25e24a345b3ae3e301742323a8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:53 [async_llm.py:261] Added request cmpl-8b31e25e24a345b3ae3e301742323a8b-0.
INFO 03-02 01:28:54 [logger.py:42] Received request cmpl-c3c25c1fdd7940b0926bec281547e0ac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:54 [async_llm.py:261] Added request cmpl-c3c25c1fdd7940b0926bec281547e0ac-0.
INFO 03-02 01:28:55 [logger.py:42] Received request cmpl-63864f68c300413199df52891ababdc0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:55 [async_llm.py:261] Added request cmpl-63864f68c300413199df52891ababdc0-0.
INFO 03-02 01:28:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:28:56 [logger.py:42] Received request cmpl-37d4ba7dfb6d48a9ad2702857f4e5282-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:56 [async_llm.py:261] Added request cmpl-37d4ba7dfb6d48a9ad2702857f4e5282-0.
INFO 03-02 01:28:57 [logger.py:42] Received request cmpl-30701c48832f4484ade28acb2df89535-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:28:57 [async_llm.py:261] Added request cmpl-30701c48832f4484ade28acb2df89535-0.
INFO 03-02 01:29:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:29:41 [logger.py:42] Received request cmpl-e172e4663e7545de9ccf77b77ba3e659-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:41 [async_llm.py:261] Added request cmpl-e172e4663e7545de9ccf77b77ba3e659-0.
INFO 03-02 01:29:42 [logger.py:42] Received request cmpl-2b1fbec1673744d5a776e285f155ebbe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:42 [async_llm.py:261] Added request cmpl-2b1fbec1673744d5a776e285f155ebbe-0.
INFO 03-02 01:29:43 [logger.py:42] Received request cmpl-0b3d5d1dec304e3c8bdf96b2e6bddf3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:43 [async_llm.py:261] Added request cmpl-0b3d5d1dec304e3c8bdf96b2e6bddf3c-0.
INFO 03-02 01:29:44 [logger.py:42] Received request cmpl-d794b108951941b28988f0b640120959-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:44 [async_llm.py:261] Added request cmpl-d794b108951941b28988f0b640120959-0.
INFO 03-02 01:29:45 [logger.py:42] Received request cmpl-0a397a35765744e1b256b920e2a9b078-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:45 [async_llm.py:261] Added request cmpl-0a397a35765744e1b256b920e2a9b078-0.
INFO 03-02 01:29:45 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:29:46 [logger.py:42] Received request cmpl-0f3c2a6167d045539b0ae3b0b004867f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:46 [async_llm.py:261] Added request cmpl-0f3c2a6167d045539b0ae3b0b004867f-0.
INFO 03-02 01:29:48 [logger.py:42] Received request cmpl-71f39393b6594337a4fedc165ee27336-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:48 [async_llm.py:261] Added request cmpl-71f39393b6594337a4fedc165ee27336-0.
INFO 03-02 01:29:49 [logger.py:42] Received request cmpl-4d1d49f88e5f4d7a915dc91245e8c88d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:49 [async_llm.py:261] Added request cmpl-4d1d49f88e5f4d7a915dc91245e8c88d-0.
INFO 03-02 01:29:50 [logger.py:42] Received request cmpl-e0bb26ff91a6457cbdab5e1db34af106-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:50 [async_llm.py:261] Added request cmpl-e0bb26ff91a6457cbdab5e1db34af106-0.
INFO 03-02 01:29:51 [logger.py:42] Received request cmpl-604e1fa8c7af4d5ebc028c47ebe798e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:51 [async_llm.py:261] Added request cmpl-604e1fa8c7af4d5ebc028c47ebe798e4-0.
INFO 03-02 01:29:52 [logger.py:42] Received request cmpl-3c9a1452eec5474fb4e3ca49bdeaa46d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:52 [async_llm.py:261] Added request cmpl-3c9a1452eec5474fb4e3ca49bdeaa46d-0.
INFO 03-02 01:29:53 [logger.py:42] Received request cmpl-78262a53af8243aa858138817f840e35-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:53 [async_llm.py:261] Added request cmpl-78262a53af8243aa858138817f840e35-0.
INFO 03-02 01:29:54 [logger.py:42] Received request cmpl-932b0beba8ef45c592aafb6bb2f0f379-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:54 [async_llm.py:261] Added request cmpl-932b0beba8ef45c592aafb6bb2f0f379-0.
INFO 03-02 01:29:55 [logger.py:42] Received request cmpl-e621b19e8c784b5a91fdcb73f45e7478-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:55 [async_llm.py:261] Added request cmpl-e621b19e8c784b5a91fdcb73f45e7478-0.
INFO 03-02 01:29:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:29:56 [logger.py:42] Received request cmpl-40b9af67cf5d4c8eb5fca99ace4d806e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:56 [async_llm.py:261] Added request cmpl-40b9af67cf5d4c8eb5fca99ace4d806e-0.
INFO 03-02 01:29:57 [logger.py:42] Received request cmpl-4f9c7bfe3851441ebd5c64cd2377b9ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:57 [async_llm.py:261] Added request cmpl-4f9c7bfe3851441ebd5c64cd2377b9ee-0.
INFO 03-02 01:29:58 [logger.py:42] Received request cmpl-5ad5ee1ec8d64de9b59f634758d0eb97-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:29:58 [async_llm.py:261] Added request cmpl-5ad5ee1ec8d64de9b59f634758d0eb97-0.
INFO 03-02 01:30:00 [logger.py:42] Received request cmpl-b03e8614fabb47c086f698ec295162b7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:00 [async_llm.py:261] Added request cmpl-b03e8614fabb47c086f698ec295162b7-0.
INFO 03-02 01:30:01 [logger.py:42] Received request cmpl-7589d604c2434ee59a6f07cb4b84ea5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:01 [async_llm.py:261] Added request cmpl-7589d604c2434ee59a6f07cb4b84ea5a-0.
INFO 03-02 01:30:02 [logger.py:42] Received request cmpl-857359b78a14441c8a4b1478f3469eba-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:02 [async_llm.py:261] Added request cmpl-857359b78a14441c8a4b1478f3469eba-0.
INFO 03-02 01:30:03 [logger.py:42] Received request cmpl-bb822c03f8f54b9c8d6b6f7e89b5f69f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:03 [async_llm.py:261] Added request cmpl-bb822c03f8f54b9c8d6b6f7e89b5f69f-0.
INFO 03-02 01:30:04 [logger.py:42] Received request cmpl-16a579284095416b9c178d67ca42f58d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:04 [async_llm.py:261] Added request cmpl-16a579284095416b9c178d67ca42f58d-0.
INFO 03-02 01:30:05 [logger.py:42] Received request cmpl-8d09e943ba014a538f42f9a9bafecee8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:05 [async_llm.py:261] Added request cmpl-8d09e943ba014a538f42f9a9bafecee8-0.
INFO 03-02 01:30:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:06 [logger.py:42] Received request cmpl-b809496de67d4c0b891313edff736e08-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:06 [async_llm.py:261] Added request cmpl-b809496de67d4c0b891313edff736e08-0.
INFO 03-02 01:30:07 [logger.py:42] Received request cmpl-39f9b494f4044ddb86748f755c45342b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:07 [async_llm.py:261] Added request cmpl-39f9b494f4044ddb86748f755c45342b-0.
INFO 03-02 01:30:08 [logger.py:42] Received request cmpl-b01eadab7da14cc2a71fea3c3904190c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:08 [async_llm.py:261] Added request cmpl-b01eadab7da14cc2a71fea3c3904190c-0.
INFO 03-02 01:30:09 [logger.py:42] Received request cmpl-3be5e4367a5b4294bdb7a3903a3e4856-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:09 [async_llm.py:261] Added request cmpl-3be5e4367a5b4294bdb7a3903a3e4856-0.
INFO 03-02 01:30:11 [logger.py:42] Received request cmpl-a2da31fd895646febfcff963c93072ab-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:11 [async_llm.py:261] Added request cmpl-a2da31fd895646febfcff963c93072ab-0.
INFO 03-02 01:30:12 [logger.py:42] Received request cmpl-d74507e8176a4cb386e4f99d2f54ba89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:12 [async_llm.py:261] Added request cmpl-d74507e8176a4cb386e4f99d2f54ba89-0.
INFO 03-02 01:30:13 [logger.py:42] Received request cmpl-90ea18c62bc44b03b6c11c806a026bbc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:13 [async_llm.py:261] Added request cmpl-90ea18c62bc44b03b6c11c806a026bbc-0.
INFO 03-02 01:30:14 [logger.py:42] Received request cmpl-c9ffc35206f94b91a54204ca38b77940-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:14 [async_llm.py:261] Added request cmpl-c9ffc35206f94b91a54204ca38b77940-0.
INFO 03-02 01:30:15 [logger.py:42] Received request cmpl-cf580c0c5e3e4eb9b6d62c16c93602a7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:15 [async_llm.py:261] Added request cmpl-cf580c0c5e3e4eb9b6d62c16c93602a7-0.
INFO 03-02 01:30:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:16 [logger.py:42] Received request cmpl-e4b08468a6864cbd8ed8e646691a1474-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:16 [async_llm.py:261] Added request cmpl-e4b08468a6864cbd8ed8e646691a1474-0.
[... 8 further request cycles elided (01:30:17–01:30:25): the same 'write a quick sort algorithm.' prompt (max_tokens=5) arrives from 1.2.3.5 roughly once per second, each producing an identical Received / 200 OK / Added triplet with a fresh request ID ...]
INFO 03-02 01:30:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:26 [logger.py:42] Received request cmpl-b2d86efc14404d4f90e8cc5aedbe00a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:26 [async_llm.py:261] Added request cmpl-b2d86efc14404d4f90e8cc5aedbe00a4-0.
INFO 03-02 01:30:27 [logger.py:42] Received request cmpl-f75f5d0a942e4ec2b99f7ccc48a017de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:27 [async_llm.py:261] Added request cmpl-f75f5d0a942e4ec2b99f7ccc48a017de-0.
INFO 03-02 01:30:28 [logger.py:42] Received request cmpl-e5f1aca7b1d84af6b5fd3adeb0a06eae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:28 [async_llm.py:261] Added request cmpl-e5f1aca7b1d84af6b5fd3adeb0a06eae-0.
INFO 03-02 01:30:29 [logger.py:42] Received request cmpl-b4ccdc4913b240cf931f9609015086eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:29 [async_llm.py:261] Added request cmpl-b4ccdc4913b240cf931f9609015086eb-0.
INFO 03-02 01:30:30 [logger.py:42] Received request cmpl-9c95ca17aaf24aeca155adc8f74fee0b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:30 [async_llm.py:261] Added request cmpl-9c95ca17aaf24aeca155adc8f74fee0b-0.
INFO 03-02 01:30:31 [logger.py:42] Received request cmpl-0e61186367144d15a44b669e8f6d3023-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:31 [async_llm.py:261] Added request cmpl-0e61186367144d15a44b669e8f6d3023-0.
INFO 03-02 01:30:32 [logger.py:42] Received request cmpl-29d4121d30424fb9aeb2ec70372ff5d9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:32 [async_llm.py:261] Added request cmpl-29d4121d30424fb9aeb2ec70372ff5d9-0.
INFO 03-02 01:30:34 [logger.py:42] Received request cmpl-18802cdf3e6b4a2abfd0d83f95e1737d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:34 [async_llm.py:261] Added request cmpl-18802cdf3e6b4a2abfd0d83f95e1737d-0.
INFO 03-02 01:30:35 [logger.py:42] Received request cmpl-1b3766027daa4e14b1a5b6e5ca6334c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:35 [async_llm.py:261] Added request cmpl-1b3766027daa4e14b1a5b6e5ca6334c3-0.
INFO 03-02 01:30:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:36 [logger.py:42] Received request cmpl-5b8a3ff273c94addbc3aae31b0104e0c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:36 [async_llm.py:261] Added request cmpl-5b8a3ff273c94addbc3aae31b0104e0c-0.
INFO 03-02 01:30:37 [logger.py:42] Received request cmpl-af571de14dca44b6815dd6d4484efaa2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:37 [async_llm.py:261] Added request cmpl-af571de14dca44b6815dd6d4484efaa2-0.
INFO 03-02 01:30:38 [logger.py:42] Received request cmpl-ecba7a2164e24602a4dbbb180e851579-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:38 [async_llm.py:261] Added request cmpl-ecba7a2164e24602a4dbbb180e851579-0.
INFO 03-02 01:30:39 [logger.py:42] Received request cmpl-87b159a764c6458e9a5ff7918dbdcdcb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:39 [async_llm.py:261] Added request cmpl-87b159a764c6458e9a5ff7918dbdcdcb-0.
INFO 03-02 01:30:40 [logger.py:42] Received request cmpl-96b0c5220f8d4d098c23737b244f0615-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:40 [async_llm.py:261] Added request cmpl-96b0c5220f8d4d098c23737b244f0615-0.
INFO 03-02 01:30:41 [logger.py:42] Received request cmpl-a0bde9e12a25468f86b76a0858f26135-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:41 [async_llm.py:261] Added request cmpl-a0bde9e12a25468f86b76a0858f26135-0.
INFO 03-02 01:30:42 [logger.py:42] Received request cmpl-6278414be59e4837b7d7a8d7a26d8210-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:42 [async_llm.py:261] Added request cmpl-6278414be59e4837b7d7a8d7a26d8210-0.
INFO 03-02 01:30:43 [logger.py:42] Received request cmpl-f34c190f6b62447094926518e563a3dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:43 [async_llm.py:261] Added request cmpl-f34c190f6b62447094926518e563a3dd-0.
INFO 03-02 01:30:45 [logger.py:42] Received request cmpl-5346c2d4baa44208964695d802eecffe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:45 [async_llm.py:261] Added request cmpl-5346c2d4baa44208964695d802eecffe-0.
INFO 03-02 01:30:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:46 [logger.py:42] Received request cmpl-f8443b81e059446eada251e9fddb272a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:46 [async_llm.py:261] Added request cmpl-f8443b81e059446eada251e9fddb272a-0.
INFO 03-02 01:30:47 [logger.py:42] Received request cmpl-0cabe291547947c0840843a3df8ac1a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:47 [async_llm.py:261] Added request cmpl-0cabe291547947c0840843a3df8ac1a4-0.
INFO 03-02 01:30:48 [logger.py:42] Received request cmpl-eb9db0a9d3f649c7b5e7c9bfd44bd376-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:48 [async_llm.py:261] Added request cmpl-eb9db0a9d3f649c7b5e7c9bfd44bd376-0.
INFO 03-02 01:30:49 [logger.py:42] Received request cmpl-bb876f41a46845e09f583d3d5e34a80b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:49 [async_llm.py:261] Added request cmpl-bb876f41a46845e09f583d3d5e34a80b-0.
INFO 03-02 01:30:50 [logger.py:42] Received request cmpl-8fe2ab934bd641de97bde4319967f1f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:50 [async_llm.py:261] Added request cmpl-8fe2ab934bd641de97bde4319967f1f6-0.
INFO 03-02 01:30:51 [logger.py:42] Received request cmpl-061733baeb534b8e87e0520ebcc23563-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:51 [async_llm.py:261] Added request cmpl-061733baeb534b8e87e0520ebcc23563-0.
INFO 03-02 01:30:52 [logger.py:42] Received request cmpl-9ef1cb729a3a467dbaf1d508f425faf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:52 [async_llm.py:261] Added request cmpl-9ef1cb729a3a467dbaf1d508f425faf7-0.
INFO 03-02 01:30:53 [logger.py:42] Received request cmpl-727ff839c84a4e119df3a31577f31367-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:53 [async_llm.py:261] Added request cmpl-727ff839c84a4e119df3a31577f31367-0.
INFO 03-02 01:30:54 [logger.py:42] Received request cmpl-4cc0d5b22afb4e93b110509a5c8fcb25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:54 [async_llm.py:261] Added request cmpl-4cc0d5b22afb4e93b110509a5c8fcb25-0.
INFO 03-02 01:30:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:30:55 [logger.py:42] Received request cmpl-96e7cfc6afb34ffc8235813bd256b918-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:30:55 [async_llm.py:261] Added request cmpl-96e7cfc6afb34ffc8235813bd256b918-0.
[... 9 identical request cycles elided (Received request / "POST /v1/completions" 200 OK / Added request), one per second from 01:30:57 to 01:31:05, all with prompt 'write a quick sort algorithm.' and max_tokens=5; only the request IDs and timestamps differ ...]
INFO 03-02 01:31:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
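The periodic `loggers.py` lines are the only entries carrying distinct metrics (throughput, queue depth, KV-cache and prefix-cache utilization). A small parser like the following can extract them for monitoring; the regex is written against the exact line format seen in this log, and the output field names are my own, not a vLLM or InferX schema.

```python
import re

# Field names (prompt_tps, gen_tps, ...) are illustrative, not an official schema.
STATS_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_stats(line: str) -> dict:
    """Extract engine metrics from a periodic stats log line as floats."""
    m = STATS_RE.search(line)
    if m is None:
        raise ValueError("not an engine stats line")
    return {k: float(v) for k, v in m.groupdict().items()}

# The stats line logged above at 01:31:05.
line = ("INFO 03-02 01:31:05 [loggers.py:116] Engine 000: "
        "Avg prompt throughput: 7.0 tokens/s, "
        "Avg generation throughput: 4.9 tokens/s, "
        "Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
        "Prefix cache hit rate: 0.0%")
stats = parse_stats(line)
```

Feeding each stats line through `parse_stats` turns the log stream into a time series suitable for dashboards or alerting on queue depth and cache hit rate.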
[... 9 identical request cycles elided (Received request / "POST /v1/completions" 200 OK / Added request), 01:31:06 to 01:31:15, same prompt and parameters as above; only the request IDs and timestamps differ ...]
INFO 03-02 01:31:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided (Received request / "POST /v1/completions" 200 OK / Added request), 01:31:16 to 01:31:25, same prompt and parameters as above; only the request IDs and timestamps differ ...]
INFO 03-02 01:31:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 identical request cycles elided (Received request / "POST /v1/completions" 200 OK / Added request), 01:31:26 to 01:31:35, same prompt and parameters as above; only the request IDs and timestamps differ ...]
INFO 03-02 01:31:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 4 identical request cycles elided (Received request / "POST /v1/completions" 200 OK / Added request), 01:31:36 to 01:31:39, same prompt and parameters as above; only the request IDs and timestamps differ ...]
INFO 03-02 01:31:40 [logger.py:42] Received request cmpl-030c22a9234e49478af90bd8c1ce94d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:40 [async_llm.py:261] Added request cmpl-030c22a9234e49478af90bd8c1ce94d2-0.
INFO 03-02 01:31:42 [logger.py:42] Received request cmpl-f66012ce852249779a023ec4920c80fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:42 [async_llm.py:261] Added request cmpl-f66012ce852249779a023ec4920c80fe-0.
INFO 03-02 01:31:43 [logger.py:42] Received request cmpl-00f910a6bfd74a1c911a2dfbd7afe7a1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:43 [async_llm.py:261] Added request cmpl-00f910a6bfd74a1c911a2dfbd7afe7a1-0.
INFO 03-02 01:31:44 [logger.py:42] Received request cmpl-19b4a85908e64d508a57cfcce71b36bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:44 [async_llm.py:261] Added request cmpl-19b4a85908e64d508a57cfcce71b36bc-0.
INFO 03-02 01:31:45 [logger.py:42] Received request cmpl-7385465373f54b3cbfdfc85a58dd3f25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:45 [async_llm.py:261] Added request cmpl-7385465373f54b3cbfdfc85a58dd3f25-0.
INFO 03-02 01:31:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:46 [logger.py:42] Received request cmpl-9c1344f9fc70441e8dba00ecca24f339-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:46 [async_llm.py:261] Added request cmpl-9c1344f9fc70441e8dba00ecca24f339-0.
INFO 03-02 01:31:47 [logger.py:42] Received request cmpl-d6b33e6515e44db091bde5df7c179d8b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:47 [async_llm.py:261] Added request cmpl-d6b33e6515e44db091bde5df7c179d8b-0.
INFO 03-02 01:31:48 [logger.py:42] Received request cmpl-074f8a5bc51d4ba6be1b233506ce04cf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:48 [async_llm.py:261] Added request cmpl-074f8a5bc51d4ba6be1b233506ce04cf-0.
INFO 03-02 01:31:49 [logger.py:42] Received request cmpl-d7ff1cc1ccdb4ccbb2b58bdaba26b6c1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:49 [async_llm.py:261] Added request cmpl-d7ff1cc1ccdb4ccbb2b58bdaba26b6c1-0.
INFO 03-02 01:31:50 [logger.py:42] Received request cmpl-e2a6502752c041d8808ce09a1e6d1e8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:50 [async_llm.py:261] Added request cmpl-e2a6502752c041d8808ce09a1e6d1e8e-0.
INFO 03-02 01:31:51 [logger.py:42] Received request cmpl-8afb19e3cef743d69b867cff07824025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:51 [async_llm.py:261] Added request cmpl-8afb19e3cef743d69b867cff07824025-0.
INFO 03-02 01:31:52 [logger.py:42] Received request cmpl-90bc597617434f68b189984f7195e1ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:52 [async_llm.py:261] Added request cmpl-90bc597617434f68b189984f7195e1ff-0.
INFO 03-02 01:31:54 [logger.py:42] Received request cmpl-75a434f364e64901a70632dd751c4761-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:54 [async_llm.py:261] Added request cmpl-75a434f364e64901a70632dd751c4761-0.
INFO 03-02 01:31:55 [logger.py:42] Received request cmpl-4f73208a134e48ccaa3f4d82b3ede582-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:55 [async_llm.py:261] Added request cmpl-4f73208a134e48ccaa3f4d82b3ede582-0.
INFO 03-02 01:31:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:31:56 [logger.py:42] Received request cmpl-4fe3366c56494b7a978856bfa44c4ccf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:56 [async_llm.py:261] Added request cmpl-4fe3366c56494b7a978856bfa44c4ccf-0.
INFO 03-02 01:31:57 [logger.py:42] Received request cmpl-d974322fbc084880baba8dfec3e0b015-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:57 [async_llm.py:261] Added request cmpl-d974322fbc084880baba8dfec3e0b015-0.
INFO 03-02 01:31:58 [logger.py:42] Received request cmpl-85df4031686e4da4904235f928351e57-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:58 [async_llm.py:261] Added request cmpl-85df4031686e4da4904235f928351e57-0.
INFO 03-02 01:31:59 [logger.py:42] Received request cmpl-630506ed9ed54217b93e7c1110ee224b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:31:59 [async_llm.py:261] Added request cmpl-630506ed9ed54217b93e7c1110ee224b-0.
INFO 03-02 01:32:00 [logger.py:42] Received request cmpl-731946ab16a342baa8c6c5e70442700b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:00 [async_llm.py:261] Added request cmpl-731946ab16a342baa8c6c5e70442700b-0.
INFO 03-02 01:32:01 [logger.py:42] Received request cmpl-3d012729f0c3427b9065eceee64fef81-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:01 [async_llm.py:261] Added request cmpl-3d012729f0c3427b9065eceee64fef81-0.
INFO 03-02 01:32:02 [logger.py:42] Received request cmpl-d50ca27c4c82499a8b701cc2a4613d2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:02 [async_llm.py:261] Added request cmpl-d50ca27c4c82499a8b701cc2a4613d2e-0.
INFO 03-02 01:32:03 [logger.py:42] Received request cmpl-89e24089bacc4dacbc82f89c53b1dd00-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:03 [async_llm.py:261] Added request cmpl-89e24089bacc4dacbc82f89c53b1dd00-0.
INFO 03-02 01:32:05 [logger.py:42] Received request cmpl-67a1a4b8594543f2931fdf492179c147-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:05 [async_llm.py:261] Added request cmpl-67a1a4b8594543f2931fdf492179c147-0.
INFO 03-02 01:32:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:06 [logger.py:42] Received request cmpl-f819807f490246baab2bb28c5fd8e71b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:06 [async_llm.py:261] Added request cmpl-f819807f490246baab2bb28c5fd8e71b-0.
INFO 03-02 01:32:07 [logger.py:42] Received request cmpl-40e2fcae9c0f4972b89b05c83807a851-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:07 [async_llm.py:261] Added request cmpl-40e2fcae9c0f4972b89b05c83807a851-0.
INFO 03-02 01:32:08 [logger.py:42] Received request cmpl-43d15fe8955f4aafa2843dc84ec5b023-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:08 [async_llm.py:261] Added request cmpl-43d15fe8955f4aafa2843dc84ec5b023-0.
INFO 03-02 01:32:09 [logger.py:42] Received request cmpl-f7e578e797564613914fa0fe1798f263-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:09 [async_llm.py:261] Added request cmpl-f7e578e797564613914fa0fe1798f263-0.
INFO 03-02 01:32:10 [logger.py:42] Received request cmpl-b25178688e5b457c8de4de4dce43aea9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:10 [async_llm.py:261] Added request cmpl-b25178688e5b457c8de4de4dce43aea9-0.
INFO 03-02 01:32:11 [logger.py:42] Received request cmpl-dfce145a49f34ac58635020aa14a99c6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:11 [async_llm.py:261] Added request cmpl-dfce145a49f34ac58635020aa14a99c6-0.
INFO 03-02 01:32:12 [logger.py:42] Received request cmpl-8c6657f1dcec46fdaf22ef2704067e6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:12 [async_llm.py:261] Added request cmpl-8c6657f1dcec46fdaf22ef2704067e6d-0.
INFO 03-02 01:32:13 [logger.py:42] Received request cmpl-d356c2f7b95f40aa81059f3c3e875da0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:13 [async_llm.py:261] Added request cmpl-d356c2f7b95f40aa81059f3c3e875da0-0.
INFO 03-02 01:32:14 [logger.py:42] Received request cmpl-db10ecbf094742eb83a6f30b9671277d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:14 [async_llm.py:261] Added request cmpl-db10ecbf094742eb83a6f30b9671277d-0.
INFO 03-02 01:32:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
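The periodic `Engine 000` lines above report rolling engine statistics. A minimal parsing sketch for pulling those metrics into a monitoring pipeline; the regex is an assumption inferred from the exact wording of the lines in this log, not an official vLLM log format:

```python
import re

# Pattern assumed from the "Engine NNN" stats lines in this log.
ENGINE_RE = re.compile(
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_engine_line(line: str):
    """Extract throughput/queue/cache metrics from an engine stats line.

    Returns a dict of floats/ints, or None if the line is not a stats line.
    """
    m = ENGINE_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_hit_pct"]),
    }

# Sample taken verbatim from a stats line in this log.
sample = ("INFO 03-02 01:31:35 [loggers.py:116] Engine 000: "
          "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, "
          "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
          "Prefix cache hit rate: 0.0%")
metrics = parse_engine_line(sample)
```

Note that `Running: 0` and `Waiting: 0` alongside nonzero throughput is consistent with short (`max_tokens=5`) requests that complete between the ten-second sampling intervals.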
INFO 03-02 01:32:15 [logger.py:42] Received request cmpl-ddce8cb8e0844003919c54b004a9e5b9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:16 [async_llm.py:261] Added request cmpl-ddce8cb8e0844003919c54b004a9e5b9-0.
INFO 03-02 01:32:17 [logger.py:42] Received request cmpl-d5733581a4594e47888ae34c1eb6e6c4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:17 [async_llm.py:261] Added request cmpl-d5733581a4594e47888ae34c1eb6e6c4-0.
INFO 03-02 01:32:18 [logger.py:42] Received request cmpl-cdf5df2b0922466bad419f0daf4d6d3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:18 [async_llm.py:261] Added request cmpl-cdf5df2b0922466bad419f0daf4d6d3f-0.
INFO 03-02 01:32:19 [logger.py:42] Received request cmpl-1dc33c2319574884af65401000eb8673-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:19 [async_llm.py:261] Added request cmpl-1dc33c2319574884af65401000eb8673-0.
INFO 03-02 01:32:20 [logger.py:42] Received request cmpl-a5fde7c0b94c4c48814777ac7aac83ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:20 [async_llm.py:261] Added request cmpl-a5fde7c0b94c4c48814777ac7aac83ae-0.
INFO 03-02 01:32:21 [logger.py:42] Received request cmpl-77980d5b5d814e4c929f8c09c201d6b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:21 [async_llm.py:261] Added request cmpl-77980d5b5d814e4c929f8c09c201d6b1-0.
INFO 03-02 01:32:22 [logger.py:42] Received request cmpl-267ee9b3999e49a581a050957b0a5172-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:22 [async_llm.py:261] Added request cmpl-267ee9b3999e49a581a050957b0a5172-0.
INFO 03-02 01:32:23 [logger.py:42] Received request cmpl-6288a8990a444ada83a41c4244ac2f37-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:23 [async_llm.py:261] Added request cmpl-6288a8990a444ada83a41c4244ac2f37-0.
INFO 03-02 01:32:24 [logger.py:42] Received request cmpl-10418c42728c40faaf6b481006ebb95a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:24 [async_llm.py:261] Added request cmpl-10418c42728c40faaf6b481006ebb95a-0.
INFO 03-02 01:32:25 [logger.py:42] Received request cmpl-2bbe71c6027c4071bd2388f612dfc175-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:25 [async_llm.py:261] Added request cmpl-2bbe71c6027c4071bd2388f612dfc175-0.
INFO 03-02 01:32:25 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:26 [logger.py:42] Received request cmpl-64cc8cbe9fea4d6595d4fc96a88201fb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:26 [async_llm.py:261] Added request cmpl-64cc8cbe9fea4d6595d4fc96a88201fb-0.
INFO 03-02 01:32:28 [logger.py:42] Received request cmpl-c2d714c18f834efd89333b5b2fd6cfb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:28 [async_llm.py:261] Added request cmpl-c2d714c18f834efd89333b5b2fd6cfb6-0.
INFO 03-02 01:32:29 [logger.py:42] Received request cmpl-c5844d06993c4a02aaa4d1bc3ff054db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:29 [async_llm.py:261] Added request cmpl-c5844d06993c4a02aaa4d1bc3ff054db-0.
INFO 03-02 01:32:30 [logger.py:42] Received request cmpl-914c3cfd72374249b8c51dd2543cc283-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:30 [async_llm.py:261] Added request cmpl-914c3cfd72374249b8c51dd2543cc283-0.
INFO 03-02 01:32:31 [logger.py:42] Received request cmpl-da318b5db40446c181ca96ae25bf5778-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:31 [async_llm.py:261] Added request cmpl-da318b5db40446c181ca96ae25bf5778-0.
INFO 03-02 01:32:32 [logger.py:42] Received request cmpl-1f328d8b81bd4e74a573b5a4410f18c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:32 [async_llm.py:261] Added request cmpl-1f328d8b81bd4e74a573b5a4410f18c7-0.
INFO 03-02 01:32:33 [logger.py:42] Received request cmpl-f063cc6be61141bfab34f3db583548f4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:33 [async_llm.py:261] Added request cmpl-f063cc6be61141bfab34f3db583548f4-0.
INFO 03-02 01:32:34 [logger.py:42] Received request cmpl-2d16090b90a841a98ba2efe52717c629-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:34 [async_llm.py:261] Added request cmpl-2d16090b90a841a98ba2efe52717c629-0.
INFO 03-02 01:32:35 [logger.py:42] Received request cmpl-7c58304b56ae435683dfda117df48efa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:35 [async_llm.py:261] Added request cmpl-7c58304b56ae435683dfda117df48efa-0.
INFO 03-02 01:32:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:36 [logger.py:42] Received request cmpl-8995fd6614c14c4fbe18068d326e981b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:36 [async_llm.py:261] Added request cmpl-8995fd6614c14c4fbe18068d326e981b-0.
INFO 03-02 01:32:37 [logger.py:42] Received request cmpl-48f0107427554d709c0103ff7d0b79e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:37 [async_llm.py:261] Added request cmpl-48f0107427554d709c0103ff7d0b79e8-0.
INFO 03-02 01:32:39 [logger.py:42] Received request cmpl-7bfcfb29ea574ad79f68c34466a97bbb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:39 [async_llm.py:261] Added request cmpl-7bfcfb29ea574ad79f68c34466a97bbb-0.
INFO 03-02 01:32:40 [logger.py:42] Received request cmpl-ca68605911a744f58b96f59f4eb32f5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:40 [async_llm.py:261] Added request cmpl-ca68605911a744f58b96f59f4eb32f5a-0.
INFO 03-02 01:32:41 [logger.py:42] Received request cmpl-c6dfae20b8a240f994aefff37ae3270e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:41 [async_llm.py:261] Added request cmpl-c6dfae20b8a240f994aefff37ae3270e-0.
INFO 03-02 01:32:42 [logger.py:42] Received request cmpl-fcff22c270df40f58ac372fa158198dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:42 [async_llm.py:261] Added request cmpl-fcff22c270df40f58ac372fa158198dd-0.
INFO 03-02 01:32:43 [logger.py:42] Received request cmpl-26200d8a738b498fa50a6f8ba03a5eb2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:43 [async_llm.py:261] Added request cmpl-26200d8a738b498fa50a6f8ba03a5eb2-0.
INFO 03-02 01:32:44 [logger.py:42] Received request cmpl-4e33b8655f5249bead80e862802ac2d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:44 [async_llm.py:261] Added request cmpl-4e33b8655f5249bead80e862802ac2d8-0.
INFO 03-02 01:32:45 [logger.py:42] Received request cmpl-c6340361a3e1461d87eda773cb4fb505-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:45 [async_llm.py:261] Added request cmpl-c6340361a3e1461d87eda773cb4fb505-0.
INFO 03-02 01:32:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:32:46 [logger.py:42] Received request cmpl-02035ce6ebbe4156892baba914aad9fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:46 [async_llm.py:261] Added request cmpl-02035ce6ebbe4156892baba914aad9fe-0.
INFO 03-02 01:32:47 [logger.py:42] Received request cmpl-7253ac2d855d47cdb0cc4b0e495fa1de-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:47 [async_llm.py:261] Added request cmpl-7253ac2d855d47cdb0cc4b0e495fa1de-0.
INFO 03-02 01:32:48 [logger.py:42] Received request cmpl-600bf39705c54e748f07135e8430a895-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:48 [async_llm.py:261] Added request cmpl-600bf39705c54e748f07135e8430a895-0.
INFO 03-02 01:32:49 [logger.py:42] Received request cmpl-6a7b380253d945f28369e4f8155df733-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:49 [async_llm.py:261] Added request cmpl-6a7b380253d945f28369e4f8155df733-0.
INFO 03-02 01:32:51 [logger.py:42] Received request cmpl-2ff17fb3429b4e56be07e8f01d062f11-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:51 [async_llm.py:261] Added request cmpl-2ff17fb3429b4e56be07e8f01d062f11-0.
INFO 03-02 01:32:52 [logger.py:42] Received request cmpl-51c58dc0739945208a9bac5d0bb2afe0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:52 [async_llm.py:261] Added request cmpl-51c58dc0739945208a9bac5d0bb2afe0-0.
INFO 03-02 01:32:53 [logger.py:42] Received request cmpl-19a2e3b6b5294a1c8638712bbc3f2a3d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:53 [async_llm.py:261] Added request cmpl-19a2e3b6b5294a1c8638712bbc3f2a3d-0.
INFO 03-02 01:32:54 [logger.py:42] Received request cmpl-00e0c2672ec04404a6f8d11603f9a709-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:54 [async_llm.py:261] Added request cmpl-00e0c2672ec04404a6f8d11603f9a709-0.
INFO 03-02 01:32:55 [logger.py:42] Received request cmpl-8bdae3c7cdd14d49a198f34b6d9f23a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:32:55 [async_llm.py:261] Added request cmpl-8bdae3c7cdd14d49a198f34b6d9f23a2-0.
INFO 03-02 01:32:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... identical request/response cycles elided: the same prompt 'write a quick sort algorithm.' (max_tokens=5, temperature=0.0, prompt_token_ids [128000, 5040, 264, 4062, 3460, 12384, 13]) arrives from 1.2.3.5:1235 roughly once per second from 01:32:56 through 01:33:38, each answered "POST /v1/completions HTTP/1.1" 200 OK and added by async_llm.py:261. Engine 000 throughput lines at 01:33:05, 01:33:15, 01:33:25, and 01:33:35 report unchanged stats: Avg prompt throughput 6.3 tokens/s, Avg generation throughput 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage 0.0%, Prefix cache hit rate 0.0% ...]
INFO 03-02 01:33:39 [logger.py:42] Received request cmpl-a366f94a53ed4e7589b0e3500e47b6cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:39 [async_llm.py:261] Added request cmpl-a366f94a53ed4e7589b0e3500e47b6cb-0.
INFO 03-02 01:33:40 [logger.py:42] Received request cmpl-f4dc2116de684ca7adf406ee346cea2f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:40 [async_llm.py:261] Added request cmpl-f4dc2116de684ca7adf406ee346cea2f-0.
INFO 03-02 01:33:41 [logger.py:42] Received request cmpl-da2f45f776e6446c839c4663fbf205fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:41 [async_llm.py:261] Added request cmpl-da2f45f776e6446c839c4663fbf205fa-0.
INFO 03-02 01:33:42 [logger.py:42] Received request cmpl-741a58d24d7348188fc4ce15b77e18db-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:42 [async_llm.py:261] Added request cmpl-741a58d24d7348188fc4ce15b77e18db-0.
INFO 03-02 01:33:43 [logger.py:42] Received request cmpl-9235dc9daab34836a3cef4ce3625e770-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:43 [async_llm.py:261] Added request cmpl-9235dc9daab34836a3cef4ce3625e770-0.
INFO 03-02 01:33:44 [logger.py:42] Received request cmpl-20795a0bc207478fb5b035da0d065974-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:44 [async_llm.py:261] Added request cmpl-20795a0bc207478fb5b035da0d065974-0.
INFO 03-02 01:33:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:33:45 [logger.py:42] Received request cmpl-d56e1374443a4eadb5cd39b6dcd7bbb5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:45 [async_llm.py:261] Added request cmpl-d56e1374443a4eadb5cd39b6dcd7bbb5-0.
INFO 03-02 01:33:47 [logger.py:42] Received request cmpl-3eb78fd7b6dc4389a7f1755006f348f3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:47 [async_llm.py:261] Added request cmpl-3eb78fd7b6dc4389a7f1755006f348f3-0.
INFO 03-02 01:33:48 [logger.py:42] Received request cmpl-0cedeb8150054f0e984ef6004f3ba9f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:48 [async_llm.py:261] Added request cmpl-0cedeb8150054f0e984ef6004f3ba9f7-0.
INFO 03-02 01:33:49 [logger.py:42] Received request cmpl-220392661cc4475eb294660a5f3920e8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:49 [async_llm.py:261] Added request cmpl-220392661cc4475eb294660a5f3920e8-0.
INFO 03-02 01:33:50 [logger.py:42] Received request cmpl-11fa4eeef4674ac4a8f2fb0d4b7e0e1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:50 [async_llm.py:261] Added request cmpl-11fa4eeef4674ac4a8f2fb0d4b7e0e1d-0.
INFO 03-02 01:33:51 [logger.py:42] Received request cmpl-edf183d796c94a3abd86364dbc23fdf3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:51 [async_llm.py:261] Added request cmpl-edf183d796c94a3abd86364dbc23fdf3-0.
INFO 03-02 01:33:52 [logger.py:42] Received request cmpl-b77686609e0b42bfbea0607571ce1c96-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:52 [async_llm.py:261] Added request cmpl-b77686609e0b42bfbea0607571ce1c96-0.
INFO 03-02 01:33:53 [logger.py:42] Received request cmpl-3355b89d7f7d40f996bf197fa2297b53-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:53 [async_llm.py:261] Added request cmpl-3355b89d7f7d40f996bf197fa2297b53-0.
INFO 03-02 01:33:54 [logger.py:42] Received request cmpl-d1a1672be46041e1b9f07657b36879dc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:54 [async_llm.py:261] Added request cmpl-d1a1672be46041e1b9f07657b36879dc-0.
INFO 03-02 01:33:55 [logger.py:42] Received request cmpl-f2f80c7f9a3d4e89aa4941f132ec881b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:33:55 [async_llm.py:261] Added request cmpl-f2f80c7f9a3d4e89aa4941f132ec881b-0.
INFO 03-02 01:33:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
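The periodic `loggers.py:116` summaries above are the easiest signal to monitor. A minimal parsing sketch, assuming only the exact vLLM metrics-line format shown in this log:

```python
import re

# Matches vLLM's periodic engine-metrics line, e.g.:
# INFO 03-02 01:33:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, ...
METRICS_RE = re.compile(
    r"Engine (?P<engine>\d+): "
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_pct>[\d.]+)%"
)

def parse_metrics(line: str):
    """Return a dict of engine metrics, or None if this is not a metrics line."""
    m = METRICS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "engine": int(d["engine"]),
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_cache_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_pct"]),
    }
```

Feeding each log line through `parse_metrics` and keeping the non-None results gives a time series of throughput and queue depth without touching the per-request entries.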
[... repeated request/response entries and unchanged throughput summaries omitted ...]
INFO 03-02 01:34:16 [logger.py:42] Received request cmpl-ed06df33bd0d4774bcc5427c400f6dfe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:16 [async_llm.py:261] Added request cmpl-ed06df33bd0d4774bcc5427c400f6dfe-0.
INFO 03-02 01:34:17 [logger.py:42] Received request cmpl-44e59f8472b84fbeb57919c6c9461169-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:17 [async_llm.py:261] Added request cmpl-44e59f8472b84fbeb57919c6c9461169-0.
INFO 03-02 01:34:18 [logger.py:42] Received request cmpl-ea35094ecc3544029148f14e2ac2ef59-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:18 [async_llm.py:261] Added request cmpl-ea35094ecc3544029148f14e2ac2ef59-0.
INFO 03-02 01:34:19 [logger.py:42] Received request cmpl-708c0c5e75d54f6dab85cf4d77536249-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:19 [async_llm.py:261] Added request cmpl-708c0c5e75d54f6dab85cf4d77536249-0.
INFO 03-02 01:34:21 [logger.py:42] Received request cmpl-a9a6398d98c3458984a31264af5d8ed8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:21 [async_llm.py:261] Added request cmpl-a9a6398d98c3458984a31264af5d8ed8-0.
INFO 03-02 01:34:22 [logger.py:42] Received request cmpl-a78205d263bb4811bed119028767f629-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:22 [async_llm.py:261] Added request cmpl-a78205d263bb4811bed119028767f629-0.
INFO 03-02 01:34:23 [logger.py:42] Received request cmpl-3e23848f80e64e9883e1262eaccc4c10-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:23 [async_llm.py:261] Added request cmpl-3e23848f80e64e9883e1262eaccc4c10-0.
INFO 03-02 01:34:24 [logger.py:42] Received request cmpl-84aab82d699c4ba386ae98a6d32f8b70-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:24 [async_llm.py:261] Added request cmpl-84aab82d699c4ba386ae98a6d32f8b70-0.
INFO 03-02 01:34:25 [logger.py:42] Received request cmpl-2e403c1f57f64d04a66b87f22764ce52-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:25 [async_llm.py:261] Added request cmpl-2e403c1f57f64d04a66b87f22764ce52-0.
INFO 03-02 01:34:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:26 [logger.py:42] Received request cmpl-6b0400b7bc014f2db1620fb9c5d9cb25-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:26 [async_llm.py:261] Added request cmpl-6b0400b7bc014f2db1620fb9c5d9cb25-0.
INFO 03-02 01:34:27 [logger.py:42] Received request cmpl-5535f4b57fd44dcc9b2b16e3160ffd5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:27 [async_llm.py:261] Added request cmpl-5535f4b57fd44dcc9b2b16e3160ffd5a-0.
INFO 03-02 01:34:28 [logger.py:42] Received request cmpl-b659b2d9cdc34373b9886023dbbf4bdd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:28 [async_llm.py:261] Added request cmpl-b659b2d9cdc34373b9886023dbbf4bdd-0.
INFO 03-02 01:34:29 [logger.py:42] Received request cmpl-4925bee6777044998de7d95abda0b39e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:29 [async_llm.py:261] Added request cmpl-4925bee6777044998de7d95abda0b39e-0.
INFO 03-02 01:34:30 [logger.py:42] Received request cmpl-d6563dae1b9648e082daa9923c63611a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:30 [async_llm.py:261] Added request cmpl-d6563dae1b9648e082daa9923c63611a-0.
INFO 03-02 01:34:31 [logger.py:42] Received request cmpl-a1c85af00af342aebcd2aed890d024e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:31 [async_llm.py:261] Added request cmpl-a1c85af00af342aebcd2aed890d024e7-0.
INFO 03-02 01:34:33 [logger.py:42] Received request cmpl-6904933483694bbe9852e53035d38be1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:33 [async_llm.py:261] Added request cmpl-6904933483694bbe9852e53035d38be1-0.
INFO 03-02 01:34:34 [logger.py:42] Received request cmpl-31675fa0e83d42c791acbc78914c250f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:34 [async_llm.py:261] Added request cmpl-31675fa0e83d42c791acbc78914c250f-0.
INFO 03-02 01:34:35 [logger.py:42] Received request cmpl-182bbdfdc90044a2af893acac3983630-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:35 [async_llm.py:261] Added request cmpl-182bbdfdc90044a2af893acac3983630-0.
INFO 03-02 01:34:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:36 [logger.py:42] Received request cmpl-6afb2eb7826247448eebd49cef3e0301-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:36 [async_llm.py:261] Added request cmpl-6afb2eb7826247448eebd49cef3e0301-0.
INFO 03-02 01:34:37 [logger.py:42] Received request cmpl-8a44fed4e691480092b65d23f5f06ab4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:37 [async_llm.py:261] Added request cmpl-8a44fed4e691480092b65d23f5f06ab4-0.
INFO 03-02 01:34:38 [logger.py:42] Received request cmpl-d17fce8f289040c485724d0f6d9db142-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:38 [async_llm.py:261] Added request cmpl-d17fce8f289040c485724d0f6d9db142-0.
INFO 03-02 01:34:39 [logger.py:42] Received request cmpl-93359d6f886c42a29844574603871c0a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:39 [async_llm.py:261] Added request cmpl-93359d6f886c42a29844574603871c0a-0.
INFO 03-02 01:34:40 [logger.py:42] Received request cmpl-89e8df9c958a4c3a8afba039e89f2d8e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:40 [async_llm.py:261] Added request cmpl-89e8df9c958a4c3a8afba039e89f2d8e-0.
INFO 03-02 01:34:41 [logger.py:42] Received request cmpl-8f0396d735e94febbbfa88e6765f2c87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:41 [async_llm.py:261] Added request cmpl-8f0396d735e94febbbfa88e6765f2c87-0.
INFO 03-02 01:34:42 [logger.py:42] Received request cmpl-9c65855832ab4ad9916b7e26c45cddc5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:42 [async_llm.py:261] Added request cmpl-9c65855832ab4ad9916b7e26c45cddc5-0.
INFO 03-02 01:34:44 [logger.py:42] Received request cmpl-d3707820f4de4d80a1a3408dd86b5809-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:44 [async_llm.py:261] Added request cmpl-d3707820f4de4d80a1a3408dd86b5809-0.
INFO 03-02 01:34:45 [logger.py:42] Received request cmpl-faf550b738d8492da1153cd4e2229c2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:45 [async_llm.py:261] Added request cmpl-faf550b738d8492da1153cd4e2229c2c-0.
INFO 03-02 01:34:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:46 [logger.py:42] Received request cmpl-58fd8a43495a4b659738cea7405764a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:46 [async_llm.py:261] Added request cmpl-58fd8a43495a4b659738cea7405764a2-0.
INFO 03-02 01:34:47 [logger.py:42] Received request cmpl-09e994cdd9d646b6a3922a650ef7eb40-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:47 [async_llm.py:261] Added request cmpl-09e994cdd9d646b6a3922a650ef7eb40-0.
INFO 03-02 01:34:48 [logger.py:42] Received request cmpl-490156fe61964a8c8b55ea5538b33b63-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:48 [async_llm.py:261] Added request cmpl-490156fe61964a8c8b55ea5538b33b63-0.
INFO 03-02 01:34:49 [logger.py:42] Received request cmpl-bb00d52da18340eb86a983ec0166d094-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:49 [async_llm.py:261] Added request cmpl-bb00d52da18340eb86a983ec0166d094-0.
INFO 03-02 01:34:50 [logger.py:42] Received request cmpl-c38748f72e714ff3bd031662123959c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:50 [async_llm.py:261] Added request cmpl-c38748f72e714ff3bd031662123959c3-0.
INFO 03-02 01:34:51 [logger.py:42] Received request cmpl-72165507154d4362a84a58d1b316864e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:51 [async_llm.py:261] Added request cmpl-72165507154d4362a84a58d1b316864e-0.
INFO 03-02 01:34:52 [logger.py:42] Received request cmpl-c84a238f6f3f41788887c4c60e187fed-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:52 [async_llm.py:261] Added request cmpl-c84a238f6f3f41788887c4c60e187fed-0.
INFO 03-02 01:34:53 [logger.py:42] Received request cmpl-51055fc1472a4a48aa70d923684bef95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:53 [async_llm.py:261] Added request cmpl-51055fc1472a4a48aa70d923684bef95-0.
INFO 03-02 01:34:55 [logger.py:42] Received request cmpl-5af8c970df8a4ee5a40f4e0fe5abb834-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:55 [async_llm.py:261] Added request cmpl-5af8c970df8a4ee5a40f4e0fe5abb834-0.
INFO 03-02 01:34:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:34:56 [logger.py:42] Received request cmpl-7e3ca9eebeaf4fad842c7dc573e44462-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:56 [async_llm.py:261] Added request cmpl-7e3ca9eebeaf4fad842c7dc573e44462-0.
INFO 03-02 01:34:57 [logger.py:42] Received request cmpl-3b2d340a3bae4c2286b7b23a60b4fdc9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:57 [async_llm.py:261] Added request cmpl-3b2d340a3bae4c2286b7b23a60b4fdc9-0.
INFO 03-02 01:34:58 [logger.py:42] Received request cmpl-70fe3078d6be480cb8f8d7c3620d67be-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:58 [async_llm.py:261] Added request cmpl-70fe3078d6be480cb8f8d7c3620d67be-0.
INFO 03-02 01:34:59 [logger.py:42] Received request cmpl-61e4089911b646b2856b0bd74e578ff8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:34:59 [async_llm.py:261] Added request cmpl-61e4089911b646b2856b0bd74e578ff8-0.
INFO 03-02 01:35:00 [logger.py:42] Received request cmpl-a70316dc1e1d416e8e64a146bf0022a8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:00 [async_llm.py:261] Added request cmpl-a70316dc1e1d416e8e64a146bf0022a8-0.
INFO 03-02 01:35:01 [logger.py:42] Received request cmpl-fdf03111efbd40b58becbedfce46f00a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:01 [async_llm.py:261] Added request cmpl-fdf03111efbd40b58becbedfce46f00a-0.
INFO 03-02 01:35:02 [logger.py:42] Received request cmpl-4d7a23df87954d6d9a22f5eef79027a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:02 [async_llm.py:261] Added request cmpl-4d7a23df87954d6d9a22f5eef79027a9-0.
INFO 03-02 01:35:03 [logger.py:42] Received request cmpl-ed2bb3c43f524a2b91aab9c9de0edd74-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:03 [async_llm.py:261] Added request cmpl-ed2bb3c43f524a2b91aab9c9de0edd74-0.
INFO 03-02 01:35:04 [logger.py:42] Received request cmpl-72821f73a08f4ed7942b96c07377db78-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:04 [async_llm.py:261] Added request cmpl-72821f73a08f4ed7942b96c07377db78-0.
INFO 03-02 01:35:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:05 [logger.py:42] Received request cmpl-9acffaff2bb04b75b7dc29f9cec20921-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:05 [async_llm.py:261] Added request cmpl-9acffaff2bb04b75b7dc29f9cec20921-0.
INFO 03-02 01:35:07 [logger.py:42] Received request cmpl-1b84ec5337654808a7737785ebee70f8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:07 [async_llm.py:261] Added request cmpl-1b84ec5337654808a7737785ebee70f8-0.
INFO 03-02 01:35:08 [logger.py:42] Received request cmpl-85d730acb1ad4db18af3f9b415d25a95-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:08 [async_llm.py:261] Added request cmpl-85d730acb1ad4db18af3f9b415d25a95-0.
INFO 03-02 01:35:09 [logger.py:42] Received request cmpl-5f4a682da3524d58805d0ff40d8f6c2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:09 [async_llm.py:261] Added request cmpl-5f4a682da3524d58805d0ff40d8f6c2a-0.
INFO 03-02 01:35:10 [logger.py:42] Received request cmpl-801bb5cc41324d66a933c38d7071620f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:10 [async_llm.py:261] Added request cmpl-801bb5cc41324d66a933c38d7071620f-0.
INFO 03-02 01:35:11 [logger.py:42] Received request cmpl-10fa7d4827c94917ae5de44159bc2924-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:11 [async_llm.py:261] Added request cmpl-10fa7d4827c94917ae5de44159bc2924-0.
INFO 03-02 01:35:12 [logger.py:42] Received request cmpl-a88bfc3d271e4f82bd44964479b11549-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:12 [async_llm.py:261] Added request cmpl-a88bfc3d271e4f82bd44964479b11549-0.
INFO 03-02 01:35:13 [logger.py:42] Received request cmpl-b4ceb107a5874d09a34c3f86feb5b84a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:13 [async_llm.py:261] Added request cmpl-b4ceb107a5874d09a34c3f86feb5b84a-0.
INFO 03-02 01:35:14 [logger.py:42] Received request cmpl-7a0578b49904416ba3eee23deebeedfa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:14 [async_llm.py:261] Added request cmpl-7a0578b49904416ba3eee23deebeedfa-0.
INFO 03-02 01:35:15 [logger.py:42] Received request cmpl-4361c5664b2f47adad461beb79d5e356-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:15 [async_llm.py:261] Added request cmpl-4361c5664b2f47adad461beb79d5e356-0.
INFO 03-02 01:35:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:16 [logger.py:42] Received request cmpl-bff2a8513bee4f85bdd04d223d0466ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:16 [async_llm.py:261] Added request cmpl-bff2a8513bee4f85bdd04d223d0466ef-0.
INFO 03-02 01:35:18 [logger.py:42] Received request cmpl-706b4b11e5f1403d9de2179e03189ab8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:18 [async_llm.py:261] Added request cmpl-706b4b11e5f1403d9de2179e03189ab8-0.
INFO 03-02 01:35:19 [logger.py:42] Received request cmpl-ea20b9ffb642498ab96a2f3b5272e1e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:19 [async_llm.py:261] Added request cmpl-ea20b9ffb642498ab96a2f3b5272e1e4-0.
INFO 03-02 01:35:20 [logger.py:42] Received request cmpl-8bcf1df146f54506b22500b0d31997a9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:20 [async_llm.py:261] Added request cmpl-8bcf1df146f54506b22500b0d31997a9-0.
INFO 03-02 01:35:21 [logger.py:42] Received request cmpl-e254619fab8b4b78892f96ab202327e7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:21 [async_llm.py:261] Added request cmpl-e254619fab8b4b78892f96ab202327e7-0.
INFO 03-02 01:35:22 [logger.py:42] Received request cmpl-98689323a52d49448610039d1178740d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:22 [async_llm.py:261] Added request cmpl-98689323a52d49448610039d1178740d-0.
INFO 03-02 01:35:23 [logger.py:42] Received request cmpl-28ecc9c66bd845bdaee0229c8f1141d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:23 [async_llm.py:261] Added request cmpl-28ecc9c66bd845bdaee0229c8f1141d3-0.
INFO 03-02 01:35:24 [logger.py:42] Received request cmpl-58e265da69d343f18380fa9755307834-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:24 [async_llm.py:261] Added request cmpl-58e265da69d343f18380fa9755307834-0.
INFO 03-02 01:35:25 [logger.py:42] Received request cmpl-2782b92104994dd38059437dc328e599-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:25 [async_llm.py:261] Added request cmpl-2782b92104994dd38059437dc328e599-0.
INFO 03-02 01:35:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:26 [logger.py:42] Received request cmpl-e92c8c88d16e4563ab4e9a6db8002e68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:26 [async_llm.py:261] Added request cmpl-e92c8c88d16e4563ab4e9a6db8002e68-0.
INFO 03-02 01:35:27 [logger.py:42] Received request cmpl-426426452d5b422dbee01ec45dbe25c7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:27 [async_llm.py:261] Added request cmpl-426426452d5b422dbee01ec45dbe25c7-0.
INFO 03-02 01:35:28 [logger.py:42] Received request cmpl-ecbb76bdf91f443d876181282a469bf7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:28 [async_llm.py:261] Added request cmpl-ecbb76bdf91f443d876181282a469bf7-0.
INFO 03-02 01:35:30 [logger.py:42] Received request cmpl-a3b6f2a0d3fc40c1801f0c779faf5a2e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:30 [async_llm.py:261] Added request cmpl-a3b6f2a0d3fc40c1801f0c779faf5a2e-0.
INFO 03-02 01:35:31 [logger.py:42] Received request cmpl-5ca831f7e17f4bee99f8f20e3b596615-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:31 [async_llm.py:261] Added request cmpl-5ca831f7e17f4bee99f8f20e3b596615-0.
INFO 03-02 01:35:32 [logger.py:42] Received request cmpl-d3ba51dd4bd54094b6721f793e529609-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:32 [async_llm.py:261] Added request cmpl-d3ba51dd4bd54094b6721f793e529609-0.
INFO 03-02 01:35:33 [logger.py:42] Received request cmpl-9fb6c1c53af64494a276ca6c7553cee8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:33 [async_llm.py:261] Added request cmpl-9fb6c1c53af64494a276ca6c7553cee8-0.
INFO 03-02 01:35:34 [logger.py:42] Received request cmpl-3f2facb1f56f47d4a1920c08a9b91064-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:34 [async_llm.py:261] Added request cmpl-3f2facb1f56f47d4a1920c08a9b91064-0.
INFO 03-02 01:35:35 [logger.py:42] Received request cmpl-ecc978f8896b4ac0865cebb4de893ec2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:35 [async_llm.py:261] Added request cmpl-ecc978f8896b4ac0865cebb4de893ec2-0.
INFO 03-02 01:35:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:36 [logger.py:42] Received request cmpl-25d8b7f6290340609ae479aaf8803e1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:36 [async_llm.py:261] Added request cmpl-25d8b7f6290340609ae479aaf8803e1e-0.
INFO 03-02 01:35:37 [logger.py:42] Received request cmpl-c985aeb58bd94d328968b0c0ec98115b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:37 [async_llm.py:261] Added request cmpl-c985aeb58bd94d328968b0c0ec98115b-0.
INFO 03-02 01:35:38 [logger.py:42] Received request cmpl-63dbf6438f184e9c85bb4bd6089b8772-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:38 [async_llm.py:261] Added request cmpl-63dbf6438f184e9c85bb4bd6089b8772-0.
INFO 03-02 01:35:39 [logger.py:42] Received request cmpl-6862bf5f36d647b68bb9c936cf287b5a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:39 [async_llm.py:261] Added request cmpl-6862bf5f36d647b68bb9c936cf287b5a-0.
INFO 03-02 01:35:41 [logger.py:42] Received request cmpl-7617cee0b8d9404eb9db5a82646b5035-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:41 [async_llm.py:261] Added request cmpl-7617cee0b8d9404eb9db5a82646b5035-0.
INFO 03-02 01:35:42 [logger.py:42] Received request cmpl-859981272f604db18ef50b3ba1b9884b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:42 [async_llm.py:261] Added request cmpl-859981272f604db18ef50b3ba1b9884b-0.
INFO 03-02 01:35:43 [logger.py:42] Received request cmpl-891b7961235d499f9f7462ce567916b1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:43 [async_llm.py:261] Added request cmpl-891b7961235d499f9f7462ce567916b1-0.
INFO 03-02 01:35:44 [logger.py:42] Received request cmpl-cbbe94a30d0b401aa4630db78acbc905-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:44 [async_llm.py:261] Added request cmpl-cbbe94a30d0b401aa4630db78acbc905-0.
INFO 03-02 01:35:45 [logger.py:42] Received request cmpl-529ff185fdeb4895a4473733b25d73bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:45 [async_llm.py:261] Added request cmpl-529ff185fdeb4895a4473733b25d73bc-0.
INFO 03-02 01:35:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:46 [logger.py:42] Received request cmpl-befe51042ea14fe98e838619954d4016-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:46 [async_llm.py:261] Added request cmpl-befe51042ea14fe98e838619954d4016-0.
INFO 03-02 01:35:47 [logger.py:42] Received request cmpl-72c90ee2fcea40a49c4102c38b5dc5b2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:47 [async_llm.py:261] Added request cmpl-72c90ee2fcea40a49c4102c38b5dc5b2-0.
INFO 03-02 01:35:48 [logger.py:42] Received request cmpl-f6fc128e5ce0450ab347f24040447b41-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:48 [async_llm.py:261] Added request cmpl-f6fc128e5ce0450ab347f24040447b41-0.
INFO 03-02 01:35:49 [logger.py:42] Received request cmpl-580c50170df941be9fbc1d5152904d07-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:49 [async_llm.py:261] Added request cmpl-580c50170df941be9fbc1d5152904d07-0.
INFO 03-02 01:35:50 [logger.py:42] Received request cmpl-7991bbb7f8564311a2ebf32a58f941ef-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:50 [async_llm.py:261] Added request cmpl-7991bbb7f8564311a2ebf32a58f941ef-0.
INFO 03-02 01:35:52 [logger.py:42] Received request cmpl-7b3793b46dbe40638ddc1ece935ad86b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:52 [async_llm.py:261] Added request cmpl-7b3793b46dbe40638ddc1ece935ad86b-0.
INFO 03-02 01:35:53 [logger.py:42] Received request cmpl-2fcfbd31f8ec41f8ae494a7349cd8c80-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:53 [async_llm.py:261] Added request cmpl-2fcfbd31f8ec41f8ae494a7349cd8c80-0.
INFO 03-02 01:35:54 [logger.py:42] Received request cmpl-4b6367987f6941f3acf92a4793b91311-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:54 [async_llm.py:261] Added request cmpl-4b6367987f6941f3acf92a4793b91311-0.
INFO 03-02 01:35:55 [logger.py:42] Received request cmpl-eddad15b9a6e4c52a46d6142837d406f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:55 [async_llm.py:261] Added request cmpl-eddad15b9a6e4c52a46d6142837d406f-0.
INFO 03-02 01:35:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:35:56 [logger.py:42] Received request cmpl-0f46c133e5cb4941ba4931145b052b68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:56 [async_llm.py:261] Added request cmpl-0f46c133e5cb4941ba4931145b052b68-0.
INFO 03-02 01:35:57 [logger.py:42] Received request cmpl-e10210f6e64b4969bdca2dbbcbc62ef1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:57 [async_llm.py:261] Added request cmpl-e10210f6e64b4969bdca2dbbcbc62ef1-0.
INFO 03-02 01:35:58 [logger.py:42] Received request cmpl-4c074b49470e44d3ae14da1ed469326f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:58 [async_llm.py:261] Added request cmpl-4c074b49470e44d3ae14da1ed469326f-0.
INFO 03-02 01:35:59 [logger.py:42] Received request cmpl-aba4342c758b4659ad71fc86e157aaa4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:35:59 [async_llm.py:261] Added request cmpl-aba4342c758b4659ad71fc86e157aaa4-0.
INFO 03-02 01:36:00 [logger.py:42] Received request cmpl-4c6cd8f7783a4a27ad63c0fb412d620b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:00 [async_llm.py:261] Added request cmpl-4c6cd8f7783a4a27ad63c0fb412d620b-0.
INFO 03-02 01:36:01 [logger.py:42] Received request cmpl-758df980492e4916b040882a4becb08e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:01 [async_llm.py:261] Added request cmpl-758df980492e4916b040882a4becb08e-0.
INFO 03-02 01:36:02 [logger.py:42] Received request cmpl-b5f446928539497b8efd088544ec809d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:02 [async_llm.py:261] Added request cmpl-b5f446928539497b8efd088544ec809d-0.
INFO 03-02 01:36:04 [logger.py:42] Received request cmpl-6f0b626bd1e74bff92b9911a59fed528-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:04 [async_llm.py:261] Added request cmpl-6f0b626bd1e74bff92b9911a59fed528-0.
INFO 03-02 01:36:05 [logger.py:42] Received request cmpl-e54619a1a6d64398862038293676ec19-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:05 [async_llm.py:261] Added request cmpl-e54619a1a6d64398862038293676ec19-0.
INFO 03-02 01:36:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:06 [logger.py:42] Received request cmpl-65f74a6c547443caa2c9a205bc8c1906-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:06 [async_llm.py:261] Added request cmpl-65f74a6c547443caa2c9a205bc8c1906-0.
INFO 03-02 01:36:07 [logger.py:42] Received request cmpl-3285b363b9a649c49673dc2ec3f462f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:07 [async_llm.py:261] Added request cmpl-3285b363b9a649c49673dc2ec3f462f6-0.
INFO 03-02 01:36:08 [logger.py:42] Received request cmpl-a6837922976144d1a36a9f0c90c4f042-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:08 [async_llm.py:261] Added request cmpl-a6837922976144d1a36a9f0c90c4f042-0.
INFO 03-02 01:36:09 [logger.py:42] Received request cmpl-8f06be323f4245b7bad5dedd75e551e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:09 [async_llm.py:261] Added request cmpl-8f06be323f4245b7bad5dedd75e551e1-0.
INFO 03-02 01:36:10 [logger.py:42] Received request cmpl-9d64c45acfb34efd9215c9de0ff4e3f6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:10 [async_llm.py:261] Added request cmpl-9d64c45acfb34efd9215c9de0ff4e3f6-0.
INFO 03-02 01:36:11 [logger.py:42] Received request cmpl-fbee6df245f841d6aab81682f73832eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:11 [async_llm.py:261] Added request cmpl-fbee6df245f841d6aab81682f73832eb-0.
INFO 03-02 01:36:12 [logger.py:42] Received request cmpl-b0737867624547b893d3de4d498bddf5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:12 [async_llm.py:261] Added request cmpl-b0737867624547b893d3de4d498bddf5-0.
INFO 03-02 01:36:13 [logger.py:42] Received request cmpl-9fbf26307de74b03b61dda32d928cf89-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:13 [async_llm.py:261] Added request cmpl-9fbf26307de74b03b61dda32d928cf89-0.
INFO 03-02 01:36:15 [logger.py:42] Received request cmpl-849cd7e569e240659f21182b656670a5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:15 [async_llm.py:261] Added request cmpl-849cd7e569e240659f21182b656670a5-0.
INFO 03-02 01:36:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:16 [logger.py:42] Received request cmpl-8bb8cf6dc5894f818f09d49b341134d6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:16 [async_llm.py:261] Added request cmpl-8bb8cf6dc5894f818f09d49b341134d6-0.
INFO 03-02 01:36:17 [logger.py:42] Received request cmpl-03039c6a95a9436f949330f37eb04ae9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:17 [async_llm.py:261] Added request cmpl-03039c6a95a9436f949330f37eb04ae9-0.
INFO 03-02 01:36:18 [logger.py:42] Received request cmpl-c636e3a27725454e88c6bae150481ee5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:18 [async_llm.py:261] Added request cmpl-c636e3a27725454e88c6bae150481ee5-0.
INFO 03-02 01:36:19 [logger.py:42] Received request cmpl-69454e0787364ac0be60ff27b9191a9c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:19 [async_llm.py:261] Added request cmpl-69454e0787364ac0be60ff27b9191a9c-0.
INFO 03-02 01:36:20 [logger.py:42] Received request cmpl-e572d2069069454d9a56c172bbda7025-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:20 [async_llm.py:261] Added request cmpl-e572d2069069454d9a56c172bbda7025-0.
INFO 03-02 01:36:21 [logger.py:42] Received request cmpl-7e330bd8182a433cbda4c2a787091d39-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:21 [async_llm.py:261] Added request cmpl-7e330bd8182a433cbda4c2a787091d39-0.
INFO 03-02 01:36:22 [logger.py:42] Received request cmpl-8d14b399888542fea14d5b6752052925-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:22 [async_llm.py:261] Added request cmpl-8d14b399888542fea14d5b6752052925-0.
INFO 03-02 01:36:23 [logger.py:42] Received request cmpl-d12f8fed733f400f8a9b7b6fd0725e44-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:23 [async_llm.py:261] Added request cmpl-d12f8fed733f400f8a9b7b6fd0725e44-0.
INFO 03-02 01:36:24 [logger.py:42] Received request cmpl-5739905c42b548438bbdbacb2a714dfb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:24 [async_llm.py:261] Added request cmpl-5739905c42b548438bbdbacb2a714dfb-0.
INFO 03-02 01:36:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:26 [logger.py:42] Received request cmpl-80c7579f152b463f87f46de8cdfa61ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:26 [async_llm.py:261] Added request cmpl-80c7579f152b463f87f46de8cdfa61ad-0.
INFO 03-02 01:36:27 [logger.py:42] Received request cmpl-e8fa31abe6c3406bad193502cf7b8cce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:27 [async_llm.py:261] Added request cmpl-e8fa31abe6c3406bad193502cf7b8cce-0.
INFO 03-02 01:36:28 [logger.py:42] Received request cmpl-a86677379ccc4993851c2db8c8b77eb6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:28 [async_llm.py:261] Added request cmpl-a86677379ccc4993851c2db8c8b77eb6-0.
INFO 03-02 01:36:29 [logger.py:42] Received request cmpl-6bdb2a49ebc14ff082fec6445880ce55-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:29 [async_llm.py:261] Added request cmpl-6bdb2a49ebc14ff082fec6445880ce55-0.
INFO 03-02 01:36:30 [logger.py:42] Received request cmpl-a58de5b5636f4a2f9049d683168f4147-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:30 [async_llm.py:261] Added request cmpl-a58de5b5636f4a2f9049d683168f4147-0.
INFO 03-02 01:36:31 [logger.py:42] Received request cmpl-323b9ea18d5d44a5a9cb0cd7c32693a4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:31 [async_llm.py:261] Added request cmpl-323b9ea18d5d44a5a9cb0cd7c32693a4-0.
INFO 03-02 01:36:32 [logger.py:42] Received request cmpl-2ad8387dcc3d46798720299d182c4bde-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:32 [async_llm.py:261] Added request cmpl-2ad8387dcc3d46798720299d182c4bde-0.
INFO 03-02 01:36:33 [logger.py:42] Received request cmpl-94466d58161c4027865ec615114ebe7f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:33 [async_llm.py:261] Added request cmpl-94466d58161c4027865ec615114ebe7f-0.
INFO 03-02 01:36:34 [logger.py:42] Received request cmpl-653008385f03431c808d3d1e8ae8fcce-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:34 [async_llm.py:261] Added request cmpl-653008385f03431c808d3d1e8ae8fcce-0.
INFO 03-02 01:36:35 [logger.py:42] Received request cmpl-1d5854a60dcd4ca2ad17416fb6dcf2f5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:35 [async_llm.py:261] Added request cmpl-1d5854a60dcd4ca2ad17416fb6dcf2f5-0.
INFO 03-02 01:36:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:36 [logger.py:42] Received request cmpl-038c15347e334dc28822b4b2b34cc7ad-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:36 [async_llm.py:261] Added request cmpl-038c15347e334dc28822b4b2b34cc7ad-0.
INFO 03-02 01:36:38 [logger.py:42] Received request cmpl-954098a9e2f341aab35eeed315bd8577-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:38 [async_llm.py:261] Added request cmpl-954098a9e2f341aab35eeed315bd8577-0.
INFO 03-02 01:36:39 [logger.py:42] Received request cmpl-fe212a83b5eb4b5e88e8cede20a60010-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:39 [async_llm.py:261] Added request cmpl-fe212a83b5eb4b5e88e8cede20a60010-0.
INFO 03-02 01:36:40 [logger.py:42] Received request cmpl-4b1abec3a81a4d02b8d931affb38c1f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:40 [async_llm.py:261] Added request cmpl-4b1abec3a81a4d02b8d931affb38c1f1-0.
INFO 03-02 01:36:41 [logger.py:42] Received request cmpl-22e698ca19ea430a9b07121ce00e66ee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:41 [async_llm.py:261] Added request cmpl-22e698ca19ea430a9b07121ce00e66ee-0.
INFO 03-02 01:36:42 [logger.py:42] Received request cmpl-01a2b8b5f8194cc3b9974ec77740fdcc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:42 [async_llm.py:261] Added request cmpl-01a2b8b5f8194cc3b9974ec77740fdcc-0.
INFO 03-02 01:36:43 [logger.py:42] Received request cmpl-156c74d2dea64fb4ba3e292ae70aee34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:43 [async_llm.py:261] Added request cmpl-156c74d2dea64fb4ba3e292ae70aee34-0.
INFO 03-02 01:36:44 [logger.py:42] Received request cmpl-d6c65784c7a84230a7a0595900af3de9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:44 [async_llm.py:261] Added request cmpl-d6c65784c7a84230a7a0595900af3de9-0.
INFO 03-02 01:36:45 [logger.py:42] Received request cmpl-62d498010b7c4c988372222bcd11fdcd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:45 [async_llm.py:261] Added request cmpl-62d498010b7c4c988372222bcd11fdcd-0.
INFO 03-02 01:36:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:46 [logger.py:42] Received request cmpl-f85fd689f38b47c180a954e34e3c2557-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:46 [async_llm.py:261] Added request cmpl-f85fd689f38b47c180a954e34e3c2557-0.
INFO 03-02 01:36:47 [logger.py:42] Received request cmpl-f882c7ad476f4a6886296023599004ae-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:47 [async_llm.py:261] Added request cmpl-f882c7ad476f4a6886296023599004ae-0.
INFO 03-02 01:36:49 [logger.py:42] Received request cmpl-d27f2e6971d442a18c7a88531a15f086-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:49 [async_llm.py:261] Added request cmpl-d27f2e6971d442a18c7a88531a15f086-0.
INFO 03-02 01:36:50 [logger.py:42] Received request cmpl-bcb8f4be25bb40c2b6d3e9ba413d2363-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:50 [async_llm.py:261] Added request cmpl-bcb8f4be25bb40c2b6d3e9ba413d2363-0.
INFO 03-02 01:36:51 [logger.py:42] Received request cmpl-ccfec958de064998bc4abc50fab38f3f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:51 [async_llm.py:261] Added request cmpl-ccfec958de064998bc4abc50fab38f3f-0.
INFO 03-02 01:36:52 [logger.py:42] Received request cmpl-a29ad853355a4669a0461cd37885c626-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:52 [async_llm.py:261] Added request cmpl-a29ad853355a4669a0461cd37885c626-0.
INFO 03-02 01:36:53 [logger.py:42] Received request cmpl-d4d7473e39414dd5a3abb78d55956ac2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:53 [async_llm.py:261] Added request cmpl-d4d7473e39414dd5a3abb78d55956ac2-0.
INFO 03-02 01:36:54 [logger.py:42] Received request cmpl-0a6e91d66d474f6cbbf6a033b42df569-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:54 [async_llm.py:261] Added request cmpl-0a6e91d66d474f6cbbf6a033b42df569-0.
INFO 03-02 01:36:55 [logger.py:42] Received request cmpl-71711c245b654e29890ab8c7cfc6d2e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:55 [async_llm.py:261] Added request cmpl-71711c245b654e29890ab8c7cfc6d2e0-0.
INFO 03-02 01:36:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:36:56 [logger.py:42] Received request cmpl-4bd600ab4b2a45a991230a507f887227-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:56 [async_llm.py:261] Added request cmpl-4bd600ab4b2a45a991230a507f887227-0.
INFO 03-02 01:36:57 [logger.py:42] Received request cmpl-935a02c335fa432c8803023d19151bee-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:57 [async_llm.py:261] Added request cmpl-935a02c335fa432c8803023d19151bee-0.
INFO 03-02 01:36:58 [logger.py:42] Received request cmpl-aaa91efa3e194f679b8e0e449c841c61-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:36:58 [async_llm.py:261] Added request cmpl-aaa91efa3e194f679b8e0e449c841c61-0.
INFO 03-02 01:37:00 [logger.py:42] Received request cmpl-f57ba03791df45c194482133fbaceeac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:00 [async_llm.py:261] Added request cmpl-f57ba03791df45c194482133fbaceeac-0.
INFO 03-02 01:37:01 [logger.py:42] Received request cmpl-edc2df8c23604601b50ca0f740588ec2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:01 [async_llm.py:261] Added request cmpl-edc2df8c23604601b50ca0f740588ec2-0.
INFO 03-02 01:37:02 [logger.py:42] Received request cmpl-c3ecd3b8c2b54027a2df888b7f84db09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:02 [async_llm.py:261] Added request cmpl-c3ecd3b8c2b54027a2df888b7f84db09-0.
INFO 03-02 01:37:03 [logger.py:42] Received request cmpl-19d08f2eff8749d3b4c44c6c337e7905-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:03 [async_llm.py:261] Added request cmpl-19d08f2eff8749d3b4c44c6c337e7905-0.
INFO 03-02 01:37:04 [logger.py:42] Received request cmpl-15226778ee5f40008333412ef4c21579-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:04 [async_llm.py:261] Added request cmpl-15226778ee5f40008333412ef4c21579-0.
INFO 03-02 01:37:05 [logger.py:42] Received request cmpl-bfeae8e571344b33bf7bf1754f0681f0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:05 [async_llm.py:261] Added request cmpl-bfeae8e571344b33bf7bf1754f0681f0-0.
INFO 03-02 01:37:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:37:06 [logger.py:42] Received request cmpl-146eb1b4d8ba42b5ae255f592e670d84-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:06 [async_llm.py:261] Added request cmpl-146eb1b4d8ba42b5ae255f592e670d84-0.
INFO 03-02 01:37:07 [logger.py:42] Received request cmpl-d6a39d844785499da6b287142e64dc47-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:07 [async_llm.py:261] Added request cmpl-d6a39d844785499da6b287142e64dc47-0.
INFO 03-02 01:37:08 [logger.py:42] Received request cmpl-9fd89745a5534539b20e89bf5a2755ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:08 [async_llm.py:261] Added request cmpl-9fd89745a5534539b20e89bf5a2755ec-0.
INFO 03-02 01:37:09 [logger.py:42] Received request cmpl-b3b3ee2779ee479eb6c82c265e27743b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:09 [async_llm.py:261] Added request cmpl-b3b3ee2779ee479eb6c82c265e27743b-0.
INFO 03-02 01:37:10 [logger.py:42] Received request cmpl-00de65920c694d068080e04c92599eb9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:10 [async_llm.py:261] Added request cmpl-00de65920c694d068080e04c92599eb9-0.
INFO 03-02 01:37:12 [logger.py:42] Received request cmpl-6d3c61c0f5174a609aa1ea237ee070e0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:12 [async_llm.py:261] Added request cmpl-6d3c61c0f5174a609aa1ea237ee070e0-0.
INFO 03-02 01:37:13 [logger.py:42] Received request cmpl-574b57fc21e140a4976814610d0c1cf6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:13 [async_llm.py:261] Added request cmpl-574b57fc21e140a4976814610d0c1cf6-0.
INFO 03-02 01:37:14 [logger.py:42] Received request cmpl-486660f91f0042598c3dbe0aed9f80c3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:14 [async_llm.py:261] Added request cmpl-486660f91f0042598c3dbe0aed9f80c3-0.
INFO 03-02 01:37:15 [logger.py:42] Received request cmpl-ae79cafb48e84359a54260792a0aa3a2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:15 [async_llm.py:261] Added request cmpl-ae79cafb48e84359a54260792a0aa3a2-0.
INFO 03-02 01:37:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:37:16 [logger.py:42] Received request cmpl-c3659f39d776406e85816240f1272661-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:16 [async_llm.py:261] Added request cmpl-c3659f39d776406e85816240f1272661-0.
INFO 03-02 01:37:17 [logger.py:42] Received request cmpl-50eb841eb45a459bbf635a4bbc7458f1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:17 [async_llm.py:261] Added request cmpl-50eb841eb45a459bbf635a4bbc7458f1-0.
INFO 03-02 01:37:18 [logger.py:42] Received request cmpl-d9da44290a3648669de4305b53d8b8fa-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:18 [async_llm.py:261] Added request cmpl-d9da44290a3648669de4305b53d8b8fa-0.
INFO 03-02 01:37:19 [logger.py:42] Received request cmpl-5a3d71ca1ee2418fa927cf4c5ef90a24-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:19 [async_llm.py:261] Added request cmpl-5a3d71ca1ee2418fa927cf4c5ef90a24-0.
INFO 03-02 01:37:20 [logger.py:42] Received request cmpl-a49a0eeb9b954ff1b729f878e5457f87-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:20 [async_llm.py:261] Added request cmpl-a49a0eeb9b954ff1b729f878e5457f87-0.
INFO 03-02 01:37:21 [logger.py:42] Received request cmpl-17e7ed4da5eb42138f0271600135b0d8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:21 [async_llm.py:261] Added request cmpl-17e7ed4da5eb42138f0271600135b0d8-0.
INFO 03-02 01:37:23 [logger.py:42] Received request cmpl-06b988ffdf38461ba40340bec8b579e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:23 [async_llm.py:261] Added request cmpl-06b988ffdf38461ba40340bec8b579e6-0.
INFO 03-02 01:37:24 [logger.py:42] Received request cmpl-98cb7d5df6e7420cadf7ef8c40b8d35b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:24 [async_llm.py:261] Added request cmpl-98cb7d5df6e7420cadf7ef8c40b8d35b-0.
INFO 03-02 01:37:25 [logger.py:42] Received request cmpl-53a967fea94e49988a3542dc1a371a1e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:25 [async_llm.py:261] Added request cmpl-53a967fea94e49988a3542dc1a371a1e-0.
INFO 03-02 01:37:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:37:26 [logger.py:42] Received request cmpl-f1f49e33b4fd4ce2a096ec2be4322c2c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:26 [async_llm.py:261] Added request cmpl-f1f49e33b4fd4ce2a096ec2be4322c2c-0.
INFO 03-02 01:37:27 [logger.py:42] Received request cmpl-e9872e27b2d74e30af4be22427652e2a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:27 [async_llm.py:261] Added request cmpl-e9872e27b2d74e30af4be22427652e2a-0.
INFO 03-02 01:37:28 [logger.py:42] Received request cmpl-47c8cf8b9b7f4247976714a7c3c9a9e1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:28 [async_llm.py:261] Added request cmpl-47c8cf8b9b7f4247976714a7c3c9a9e1-0.
INFO 03-02 01:37:29 [logger.py:42] Received request cmpl-ad73a11a1b464556af3df8ccd85b4aaf-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:29 [async_llm.py:261] Added request cmpl-ad73a11a1b464556af3df8ccd85b4aaf-0.
INFO 03-02 01:37:30 [logger.py:42] Received request cmpl-519b6a40df78436fbd3bde788eb72a75-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:37:30 [async_llm.py:261] Added request cmpl-519b6a40df78436fbd3bde788eb72a75-0.
[... 4 further near-identical request triplets elided ("Received request" / "POST /v1/completions 200 OK" / "Added request", same prompt and params, unique cmpl-* IDs), roughly one per second, 01:37:31–01:37:35 ...]
INFO 03-02 01:37:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
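The periodic `Engine 000` metrics line is consistent with the request cadence visible in the timestamps. Assuming a roughly 10-second averaging window with about 9 requests landing in it (inferred from the log, not stated by vLLM), each carrying 7 prompt tokens and generating the full `max_tokens=5`, the expected averages come out to exactly the logged 6.3 and 4.5 tokens/s:

```python
# Inferred from the timestamps: ~9 requests in a ~10 s metrics window.
requests_per_sec = 9 / 10

prompt_tokens_per_req = 7  # len(prompt_token_ids) in each log entry
gen_tokens_per_req = 5     # max_tokens=5, assumed fully generated

prompt_tps = requests_per_sec * prompt_tokens_per_req
gen_tps = requests_per_sec * gen_tokens_per_req

print(f"Avg prompt throughput: {prompt_tps:.1f} tokens/s")
print(f"Avg generation throughput: {gen_tps:.1f} tokens/s")
```

The later line reporting 7.0 / 4.8 tokens/s fits the same arithmetic with 10 requests in the window; `Running: 0 reqs` alongside nonzero throughput simply means each 5-token completion finishes well inside the window.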
[... 9 further near-identical request triplets elided, roughly one per second, 01:37:36–01:37:44 ...]
INFO 03-02 01:37:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 10 further near-identical request triplets elided, roughly one per second, 01:37:45–01:37:55 ...]
INFO 03-02 01:37:55 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further near-identical request triplets elided, roughly one per second, 01:37:56–01:38:05 ...]
INFO 03-02 01:38:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 8 further near-identical request triplets elided, roughly one per second, 01:38:06–01:38:14 ...]
INFO 03-02 01:38:15 [logger.py:42] Received request cmpl-d8a32912a2264d60b9392211443b1f30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:15 [async_llm.py:261] Added request cmpl-d8a32912a2264d60b9392211443b1f30-0.
INFO 03-02 01:38:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:55 [async_llm.py:261] Added request cmpl-96ccd4751e114b7db9db77c1a3ef3529-0.
INFO 03-02 01:38:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:38:56 [logger.py:42] Received request cmpl-99bee5385342449da49479b1a60059d4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:56 [async_llm.py:261] Added request cmpl-99bee5385342449da49479b1a60059d4-0.
INFO 03-02 01:38:57 [logger.py:42] Received request cmpl-88426e5a73884fe9998f4313591bc5ec-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:57 [async_llm.py:261] Added request cmpl-88426e5a73884fe9998f4313591bc5ec-0.
INFO 03-02 01:38:58 [logger.py:42] Received request cmpl-a292c62ab45e481086eaf852bf9f06c8-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:58 [async_llm.py:261] Added request cmpl-a292c62ab45e481086eaf852bf9f06c8-0.
INFO 03-02 01:38:59 [logger.py:42] Received request cmpl-bee4c5ee66ba4c1583f54913b01ec1eb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:38:59 [async_llm.py:261] Added request cmpl-bee4c5ee66ba4c1583f54913b01ec1eb-0.
INFO 03-02 01:39:00 [logger.py:42] Received request cmpl-3b76b4efa3254cfdadbf04e29df195ff-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:00 [async_llm.py:261] Added request cmpl-3b76b4efa3254cfdadbf04e29df195ff-0.
INFO 03-02 01:39:01 [logger.py:42] Received request cmpl-72fc608976cf4239b4725c7ae16d6973-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:01 [async_llm.py:261] Added request cmpl-72fc608976cf4239b4725c7ae16d6973-0.
INFO 03-02 01:39:02 [logger.py:42] Received request cmpl-8c21095b4daf481eac3bd0148048a2f2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:02 [async_llm.py:261] Added request cmpl-8c21095b4daf481eac3bd0148048a2f2-0.
INFO 03-02 01:39:03 [logger.py:42] Received request cmpl-6ac7d4f280494ba3a94e759a885ef8e6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:03 [async_llm.py:261] Added request cmpl-6ac7d4f280494ba3a94e759a885ef8e6-0.
INFO 03-02 01:39:04 [logger.py:42] Received request cmpl-1aa2d6033bfb471ca836d7debadc6b1a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:04 [async_llm.py:261] Added request cmpl-1aa2d6033bfb471ca836d7debadc6b1a-0.
INFO 03-02 01:39:05 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:39:06 [logger.py:42] Received request cmpl-5e90cd92547d4485b7ef698e56d83840-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:06 [async_llm.py:261] Added request cmpl-5e90cd92547d4485b7ef698e56d83840-0.
INFO 03-02 01:39:07 [logger.py:42] Received request cmpl-b6f5f4bfbfc8460db20fb4386fc5a596-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:07 [async_llm.py:261] Added request cmpl-b6f5f4bfbfc8460db20fb4386fc5a596-0.
INFO 03-02 01:39:08 [logger.py:42] Received request cmpl-0b744d98afb64961b28c79584e642296-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:08 [async_llm.py:261] Added request cmpl-0b744d98afb64961b28c79584e642296-0.
INFO 03-02 01:39:09 [logger.py:42] Received request cmpl-7e2aecdf4b9c40af9742f85d9d2f427e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:09 [async_llm.py:261] Added request cmpl-7e2aecdf4b9c40af9742f85d9d2f427e-0.
INFO 03-02 01:39:10 [logger.py:42] Received request cmpl-4dafe69c26ee420ca7477eaa7b6847d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:10 [async_llm.py:261] Added request cmpl-4dafe69c26ee420ca7477eaa7b6847d3-0.
INFO 03-02 01:39:11 [logger.py:42] Received request cmpl-ebd4195e66b14572a22ea7b4b444562d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:11 [async_llm.py:261] Added request cmpl-ebd4195e66b14572a22ea7b4b444562d-0.
INFO 03-02 01:39:12 [logger.py:42] Received request cmpl-5518de40f4474d8198d40636a4afce93-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:12 [async_llm.py:261] Added request cmpl-5518de40f4474d8198d40636a4afce93-0.
INFO 03-02 01:39:13 [logger.py:42] Received request cmpl-74cc0c2ec29449e4bb561d6f02058a1d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:13 [async_llm.py:261] Added request cmpl-74cc0c2ec29449e4bb561d6f02058a1d-0.
INFO 03-02 01:39:14 [logger.py:42] Received request cmpl-be132afc31c347a0bd756bec634cf77d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:14 [async_llm.py:261] Added request cmpl-be132afc31c347a0bd756bec634cf77d-0.
INFO 03-02 01:39:15 [logger.py:42] Received request cmpl-5842aed2d25041fca514e718d3b131d3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:15 [async_llm.py:261] Added request cmpl-5842aed2d25041fca514e718d3b131d3-0.
INFO 03-02 01:39:15 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:39:16 [logger.py:42] Received request cmpl-7ac4f2fc13954ee5850783eaa5a7d2b4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:16 [async_llm.py:261] Added request cmpl-7ac4f2fc13954ee5850783eaa5a7d2b4-0.
INFO 03-02 01:39:18 [logger.py:42] Received request cmpl-853f2121dc4440d7b9bd251ad4f12519-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:18 [async_llm.py:261] Added request cmpl-853f2121dc4440d7b9bd251ad4f12519-0.
INFO 03-02 01:39:19 [logger.py:42] Received request cmpl-80f7e98e765f4bf79a7850c941e60ad0-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:19 [async_llm.py:261] Added request cmpl-80f7e98e765f4bf79a7850c941e60ad0-0.
INFO 03-02 01:39:20 [logger.py:42] Received request cmpl-050960c1cbfa4c5d936ea3b3e1d2aa7a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:20 [async_llm.py:261] Added request cmpl-050960c1cbfa4c5d936ea3b3e1d2aa7a-0.
INFO 03-02 01:39:21 [logger.py:42] Received request cmpl-48cf6a2e83164cd8989100c5c243d6bc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:21 [async_llm.py:261] Added request cmpl-48cf6a2e83164cd8989100c5c243d6bc-0.
INFO 03-02 01:39:22 [logger.py:42] Received request cmpl-aacfdc92c2154f399c39a01dde875e30-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:22 [async_llm.py:261] Added request cmpl-aacfdc92c2154f399c39a01dde875e30-0.
INFO 03-02 01:39:23 [logger.py:42] Received request cmpl-d311953ab299485785064c83de0c3417-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:23 [async_llm.py:261] Added request cmpl-d311953ab299485785064c83de0c3417-0.
INFO 03-02 01:39:24 [logger.py:42] Received request cmpl-0174350b520d48f09bbe28656939fde6-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:24 [async_llm.py:261] Added request cmpl-0174350b520d48f09bbe28656939fde6-0.
INFO 03-02 01:39:25 [logger.py:42] Received request cmpl-4c4bec2a0f9a4b9cabfa5eb02f3bbe6d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:25 [async_llm.py:261] Added request cmpl-4c4bec2a0f9a4b9cabfa5eb02f3bbe6d-0.
INFO 03-02 01:39:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
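The periodic `loggers.py:116` lines above summarize engine health (throughput, queue depth, KV-cache and prefix-cache usage). If these entries need to be charted or alerted on, the metrics can be pulled out of a raw line with a small parser. The regex below is a sketch written against the exact format shown in this log; it is not an official vLLM utility, and the field names in the returned dict are my own.

```python
import re

# Matches the periodic engine-stats line emitted by loggers.py:116 in this log.
STATS_RE = re.compile(
    r"Engine (?P<engine>\d+): "
    r"Avg prompt throughput: (?P<prompt_tps>[\d.]+) tokens/s, "
    r"Avg generation throughput: (?P<gen_tps>[\d.]+) tokens/s, "
    r"Running: (?P<running>\d+) reqs, Waiting: (?P<waiting>\d+) reqs, "
    r"GPU KV cache usage: (?P<kv_pct>[\d.]+)%, "
    r"Prefix cache hit rate: (?P<prefix_hit_pct>[\d.]+)%"
)

def parse_engine_stats(line: str):
    """Return the engine metrics from a stats line, or None if it doesn't match."""
    m = STATS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "engine": int(d["engine"]),
        "prompt_tps": float(d["prompt_tps"]),
        "gen_tps": float(d["gen_tps"]),
        "running": int(d["running"]),
        "waiting": int(d["waiting"]),
        "kv_pct": float(d["kv_pct"]),
        "prefix_hit_pct": float(d["prefix_hit_pct"]),
    }

# Sample taken verbatim from the log stream above.
sample = ("INFO 03-02 01:39:25 [loggers.py:116] Engine 000: "
          "Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, "
          "Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, "
          "Prefix cache hit rate: 0.0%")
stats = parse_engine_stats(sample)
```

Note that `Running: 0 reqs, Waiting: 0 reqs` alongside nonzero average throughput is consistent with the short bursty requests in this log: each `max_tokens=5` completion finishes between stats ticks, so the queue is usually empty at sample time.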
INFO 03-02 01:39:26 [logger.py:42] Received request cmpl-6f8ddf88e13c483db0e204c7045f739b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:26 [async_llm.py:261] Added request cmpl-6f8ddf88e13c483db0e204c7045f739b-0.
INFO 03-02 01:39:27 [logger.py:42] Received request cmpl-aaf077dc299549d88474ee159c0c17cb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:27 [async_llm.py:261] Added request cmpl-aaf077dc299549d88474ee159c0c17cb-0.
INFO 03-02 01:39:29 [logger.py:42] Received request cmpl-dca1a44dbc98494390528b54f1ed7b02-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:39:29 [async_llm.py:261] Added request cmpl-dca1a44dbc98494390528b54f1ed7b02-0.
INFO 03-02 01:39:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:13 [async_llm.py:261] Added request cmpl-dcee97ef2e2845f5beda320cf063561d-0.
INFO 03-02 01:40:15 [logger.py:42] Received request cmpl-90acd7797dfd49d7bba7ca4ef4c53ee9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:15 [async_llm.py:261] Added request cmpl-90acd7797dfd49d7bba7ca4ef4c53ee9-0.
INFO 03-02 01:40:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:16 [logger.py:42] Received request cmpl-9de5b8b848884afe8d844e2fefee5c13-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:16 [async_llm.py:261] Added request cmpl-9de5b8b848884afe8d844e2fefee5c13-0.
INFO 03-02 01:40:17 [logger.py:42] Received request cmpl-6a536657f9934b4b8abc00f27189172d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:17 [async_llm.py:261] Added request cmpl-6a536657f9934b4b8abc00f27189172d-0.
INFO 03-02 01:40:18 [logger.py:42] Received request cmpl-0a4df0abc55c46a58a4c9c8195b41c42-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:18 [async_llm.py:261] Added request cmpl-0a4df0abc55c46a58a4c9c8195b41c42-0.
INFO 03-02 01:40:19 [logger.py:42] Received request cmpl-0e00efcacfe542ae84753254d8a910bd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:19 [async_llm.py:261] Added request cmpl-0e00efcacfe542ae84753254d8a910bd-0.
INFO 03-02 01:40:20 [logger.py:42] Received request cmpl-80653427e1d74e6682893b604a7b1aac-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:20 [async_llm.py:261] Added request cmpl-80653427e1d74e6682893b604a7b1aac-0.
INFO 03-02 01:40:21 [logger.py:42] Received request cmpl-6383babfbf02490393d5300875bd2e34-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:21 [async_llm.py:261] Added request cmpl-6383babfbf02490393d5300875bd2e34-0.
INFO 03-02 01:40:22 [logger.py:42] Received request cmpl-e386c84ed4074e179334406ccc57242f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:22 [async_llm.py:261] Added request cmpl-e386c84ed4074e179334406ccc57242f-0.
INFO 03-02 01:40:23 [logger.py:42] Received request cmpl-e9a8b7e7d1f6409f8bef2d904399e001-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:23 [async_llm.py:261] Added request cmpl-e9a8b7e7d1f6409f8bef2d904399e001-0.
INFO 03-02 01:40:24 [logger.py:42] Received request cmpl-a527e52bf1c7434f99d482c5afdba949-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:24 [async_llm.py:261] Added request cmpl-a527e52bf1c7434f99d482c5afdba949-0.
INFO 03-02 01:40:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:26 [logger.py:42] Received request cmpl-df27172d90ed4c309a85fc1e7b7854dd-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:26 [async_llm.py:261] Added request cmpl-df27172d90ed4c309a85fc1e7b7854dd-0.
INFO 03-02 01:40:27 [logger.py:42] Received request cmpl-9652db8933b747bfbdcd195552a18178-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:27 [async_llm.py:261] Added request cmpl-9652db8933b747bfbdcd195552a18178-0.
INFO 03-02 01:40:28 [logger.py:42] Received request cmpl-4058cbf745914beb8e5522f3e3d9d626-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:28 [async_llm.py:261] Added request cmpl-4058cbf745914beb8e5522f3e3d9d626-0.
INFO 03-02 01:40:29 [logger.py:42] Received request cmpl-f7073b2b30044c9189aa97cdb2f19a68-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:29 [async_llm.py:261] Added request cmpl-f7073b2b30044c9189aa97cdb2f19a68-0.
INFO 03-02 01:40:30 [logger.py:42] Received request cmpl-e9f1f85e6efb45b887b42ea4fbf54741-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:30 [async_llm.py:261] Added request cmpl-e9f1f85e6efb45b887b42ea4fbf54741-0.
INFO 03-02 01:40:31 [logger.py:42] Received request cmpl-078f6c7a30784e8382f9b78327c270e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:31 [async_llm.py:261] Added request cmpl-078f6c7a30784e8382f9b78327c270e5-0.
INFO 03-02 01:40:32 [logger.py:42] Received request cmpl-0d02377433154e0ba9cd82fa2cb7d93a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:32 [async_llm.py:261] Added request cmpl-0d02377433154e0ba9cd82fa2cb7d93a-0.
INFO 03-02 01:40:33 [logger.py:42] Received request cmpl-509acf13d76247b097c4e1216ed70bf2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:33 [async_llm.py:261] Added request cmpl-509acf13d76247b097c4e1216ed70bf2-0.
INFO 03-02 01:40:34 [logger.py:42] Received request cmpl-595c1a49ec074379ad4590152e4181b3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:34 [async_llm.py:261] Added request cmpl-595c1a49ec074379ad4590152e4181b3-0.
INFO 03-02 01:40:35 [logger.py:42] Received request cmpl-cc31f8c461564e63ae87638e7bedd61d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:35 [async_llm.py:261] Added request cmpl-cc31f8c461564e63ae87638e7bedd61d-0.
INFO 03-02 01:40:35 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:36 [logger.py:42] Received request cmpl-0d6b3be271944ba8aec87c8691f9dc3c-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:37 [async_llm.py:261] Added request cmpl-0d6b3be271944ba8aec87c8691f9dc3c-0.
INFO 03-02 01:40:38 [logger.py:42] Received request cmpl-7aa1d8f751eb44bd959c0b9c8bba5d09-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:38 [async_llm.py:261] Added request cmpl-7aa1d8f751eb44bd959c0b9c8bba5d09-0.
INFO 03-02 01:40:39 [logger.py:42] Received request cmpl-0a7b51d3d5bc4da89d8133f450b8a71f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:39 [async_llm.py:261] Added request cmpl-0a7b51d3d5bc4da89d8133f450b8a71f-0.
INFO 03-02 01:40:40 [logger.py:42] Received request cmpl-8d8c32c18e9b44e7a74375f58b986fa1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:40 [async_llm.py:261] Added request cmpl-8d8c32c18e9b44e7a74375f58b986fa1-0.
INFO 03-02 01:40:41 [logger.py:42] Received request cmpl-19f7b59d9f3c494e99719924c846f23b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:41 [async_llm.py:261] Added request cmpl-19f7b59d9f3c494e99719924c846f23b-0.
INFO 03-02 01:40:42 [logger.py:42] Received request cmpl-f61cee6b3f2544dea58d7cc03b2eb095-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:42 [async_llm.py:261] Added request cmpl-f61cee6b3f2544dea58d7cc03b2eb095-0.
INFO 03-02 01:40:43 [logger.py:42] Received request cmpl-98d0af8547e448b395e3fa61d588aed9-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:43 [async_llm.py:261] Added request cmpl-98d0af8547e448b395e3fa61d588aed9-0.
INFO 03-02 01:40:44 [logger.py:42] Received request cmpl-68d2cfdb1d844a34836098cca27a5913-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:44 [async_llm.py:261] Added request cmpl-68d2cfdb1d844a34836098cca27a5913-0.
INFO 03-02 01:40:45 [logger.py:42] Received request cmpl-2a90ab9ed06d408e8391b53a2a19b7fe-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:45 [async_llm.py:261] Added request cmpl-2a90ab9ed06d408e8391b53a2a19b7fe-0.
INFO 03-02 01:40:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.9 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:40:46 [logger.py:42] Received request cmpl-adc0ce135bdd4df0b2e9fb87141d4674-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:46 [async_llm.py:261] Added request cmpl-adc0ce135bdd4df0b2e9fb87141d4674-0.
INFO 03-02 01:40:47 [logger.py:42] Received request cmpl-b742cffb69924dcea1c1d7d6b072814e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:40:47 [async_llm.py:261] Added request cmpl-b742cffb69924dcea1c1d7d6b072814e-0.
[… 41 further request/response cycles (03-02 01:40:49 through 01:41:32) elided: each repeats the same prompt ('write a quick sort algorithm.') and the same SamplingParams (temperature=0.0, top_p=1.0, max_tokens=5), and each is acknowledged with "POST /v1/completions HTTP/1.1" 200 OK followed by an async_llm.py "Added request" entry. The periodic loggers.py:116 stats lines (01:40:55, 01:41:05, 01:41:15, 01:41:25) are identical throughout: Engine 000, avg prompt throughput 6.3 tokens/s, avg generation throughput 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, prefix cache hit rate: 0.0% …]
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:32 [async_llm.py:261] Added request cmpl-15b7e2b6270240f58a0c87838f1b5b5e-0.
INFO 03-02 01:41:34 [logger.py:42] Received request cmpl-d49004dc76d8445e87e8f52878946c22-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:34 [async_llm.py:261] Added request cmpl-d49004dc76d8445e87e8f52878946c22-0.
INFO 03-02 01:41:35 [logger.py:42] Received request cmpl-bb0ca237626844a89452b7a3b76fbdea-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:35 [async_llm.py:261] Added request cmpl-bb0ca237626844a89452b7a3b76fbdea-0.
INFO 03-02 01:41:35 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:36 [logger.py:42] Received request cmpl-6c6daabac5874e8ab50a3f1639416374-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:36 [async_llm.py:261] Added request cmpl-6c6daabac5874e8ab50a3f1639416374-0.
INFO 03-02 01:41:37 [logger.py:42] Received request cmpl-8658eae00d7e4401a19a601f3c8179e5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:37 [async_llm.py:261] Added request cmpl-8658eae00d7e4401a19a601f3c8179e5-0.
INFO 03-02 01:41:38 [logger.py:42] Received request cmpl-90e14088c7264d128982e9a5d17f3053-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:38 [async_llm.py:261] Added request cmpl-90e14088c7264d128982e9a5d17f3053-0.
INFO 03-02 01:41:39 [logger.py:42] Received request cmpl-dedbd82562da47e6beb049b59e223ef2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:39 [async_llm.py:261] Added request cmpl-dedbd82562da47e6beb049b59e223ef2-0.
INFO 03-02 01:41:40 [logger.py:42] Received request cmpl-5ec47c68c83a42988d4b37346c05927e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:40 [async_llm.py:261] Added request cmpl-5ec47c68c83a42988d4b37346c05927e-0.
INFO 03-02 01:41:41 [logger.py:42] Received request cmpl-55f0bf41c8f240819f8c19e4b3f038f7-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:41 [async_llm.py:261] Added request cmpl-55f0bf41c8f240819f8c19e4b3f038f7-0.
INFO 03-02 01:41:42 [logger.py:42] Received request cmpl-d08c3dc6d95e4d48b69773dfb62e4d3e-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:42 [async_llm.py:261] Added request cmpl-d08c3dc6d95e4d48b69773dfb62e4d3e-0.
INFO 03-02 01:41:43 [logger.py:42] Received request cmpl-08b8247bc6554ae9b4bb44020d4c07e4-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:43 [async_llm.py:261] Added request cmpl-08b8247bc6554ae9b4bb44020d4c07e4-0.
INFO 03-02 01:41:44 [logger.py:42] Received request cmpl-f50ea32961064ed6bd43a88305df828f-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:44 [async_llm.py:261] Added request cmpl-f50ea32961064ed6bd43a88305df828f-0.
INFO 03-02 01:41:45 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:46 [logger.py:42] Received request cmpl-1cbad5227bae4076b5ba54cffa61146b-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:46 [async_llm.py:261] Added request cmpl-1cbad5227bae4076b5ba54cffa61146b-0.
INFO 03-02 01:41:47 [logger.py:42] Received request cmpl-3e9a3ffef2df45eba7727ed91fd7b348-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:47 [async_llm.py:261] Added request cmpl-3e9a3ffef2df45eba7727ed91fd7b348-0.
INFO 03-02 01:41:48 [logger.py:42] Received request cmpl-5183a58e8e3d45928f089041adec1967-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:48 [async_llm.py:261] Added request cmpl-5183a58e8e3d45928f089041adec1967-0.
INFO 03-02 01:41:49 [logger.py:42] Received request cmpl-1a1e5c0d1d554e23b993a8d515e67218-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:49 [async_llm.py:261] Added request cmpl-1a1e5c0d1d554e23b993a8d515e67218-0.
INFO 03-02 01:41:50 [logger.py:42] Received request cmpl-af49d0d076cf4b089e9664d44b540e43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:50 [async_llm.py:261] Added request cmpl-af49d0d076cf4b089e9664d44b540e43-0.
INFO 03-02 01:41:51 [logger.py:42] Received request cmpl-e04b2e04aa204348957733de66e28ea1-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:51 [async_llm.py:261] Added request cmpl-e04b2e04aa204348957733de66e28ea1-0.
INFO 03-02 01:41:52 [logger.py:42] Received request cmpl-3e9eb4b35cca4bc9a718044921b7a7cc-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:52 [async_llm.py:261] Added request cmpl-3e9eb4b35cca4bc9a718044921b7a7cc-0.
INFO 03-02 01:41:53 [logger.py:42] Received request cmpl-9e6e71a1120a47f79107a89afdbf48a3-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:53 [async_llm.py:261] Added request cmpl-9e6e71a1120a47f79107a89afdbf48a3-0.
INFO 03-02 01:41:54 [logger.py:42] Received request cmpl-6a0a1a0b12f24e37bd895bb5d1ef76d2-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:54 [async_llm.py:261] Added request cmpl-6a0a1a0b12f24e37bd895bb5d1ef76d2-0.
INFO 03-02 01:41:55 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:41:55 [logger.py:42] Received request cmpl-fb7f787f96714f948480413c09d39d43-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:55 [async_llm.py:261] Added request cmpl-fb7f787f96714f948480413c09d39d43-0.
INFO 03-02 01:41:57 [logger.py:42] Received request cmpl-0a9e11e3d58b4e308d649025d909acd5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:57 [async_llm.py:261] Added request cmpl-0a9e11e3d58b4e308d649025d909acd5-0.
INFO 03-02 01:41:58 [logger.py:42] Received request cmpl-f7d1f9866c524c7ebccc5916104c7549-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:58 [async_llm.py:261] Added request cmpl-f7d1f9866c524c7ebccc5916104c7549-0.
INFO 03-02 01:41:59 [logger.py:42] Received request cmpl-dca11b56602640779366a64f4dabeceb-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:41:59 [async_llm.py:261] Added request cmpl-dca11b56602640779366a64f4dabeceb-0.
INFO 03-02 01:42:00 [logger.py:42] Received request cmpl-cab7a6c868094b21bc51cf994869da66-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:00 [async_llm.py:261] Added request cmpl-cab7a6c868094b21bc51cf994869da66-0.
INFO 03-02 01:42:01 [logger.py:42] Received request cmpl-7a86b26329ea47368946ff167f225472-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:01 [async_llm.py:261] Added request cmpl-7a86b26329ea47368946ff167f225472-0.
INFO 03-02 01:42:02 [logger.py:42] Received request cmpl-da188d2898bb453bb61f5e7515ce844a-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:02 [async_llm.py:261] Added request cmpl-da188d2898bb453bb61f5e7515ce844a-0.
INFO 03-02 01:42:03 [logger.py:42] Received request cmpl-388ad01b4ccf4e408dd5919fc86e975d-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:03 [async_llm.py:261] Added request cmpl-388ad01b4ccf4e408dd5919fc86e975d-0.
INFO 03-02 01:42:04 [logger.py:42] Received request cmpl-427e9d9ad8d4477c8da841854f4146b5-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:04 [async_llm.py:261] Added request cmpl-427e9d9ad8d4477c8da841854f4146b5-0.
INFO 03-02 01:42:05 [logger.py:42] Received request cmpl-6c965d3580114834b348277845912b54-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:05 [async_llm.py:261] Added request cmpl-6c965d3580114834b348277845912b54-0.
INFO 03-02 01:42:05 [loggers.py:116] Engine 000: Avg prompt throughput: 7.0 tokens/s, Avg generation throughput: 5.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-02 01:42:06 [logger.py:42] Received request cmpl-d8445f8c1b514953b42446da6781ba62-0: prompt: 'write a quick sort algorithm.', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=5, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [128000, 5040, 264, 4062, 3460, 12384, 13], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO:  1.2.3.5:1235 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-02 01:42:06 [async_llm.py:261] Added request cmpl-d8445f8c1b514953b42446da6781ba62-0.
[... 8 further Received request / 200 OK / Added request entry groups (01:42:07 – 01:42:15) omitted: same prompt and SamplingParams as above, only the request id and timestamp change ...]
INFO 03-02 01:42:15 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 9 further Received request / 200 OK / Added request entry groups (01:42:16 – 01:42:25) omitted: same prompt and SamplingParams as above, only the request id and timestamp change ...]
INFO 03-02 01:42:25 [loggers.py:116] Engine 000: Avg prompt throughput: 6.3 tokens/s, Avg generation throughput: 4.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
[... 3 further Received request / 200 OK / Added request entry groups (01:42:26 – 01:42:28) omitted: same prompt and SamplingParams as above, only the request id and timestamp change ...]
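The entries above all record the same OpenAI-compatible completion call being replayed against the funcpod. As a rough sketch of how such a request could be reproduced, the snippet below builds a payload mirroring the SamplingParams visible in the log (greedy decoding with `temperature=0.0`, capped at 5 completion tokens). The endpoint URL and the `model` name are placeholders taken from this page, not a documented InferX address:

```python
import json
from urllib import request

# Placeholder endpoint: substitute the real funcpod URL exposed by InferX.
ENDPOINT = "http://localhost:8000/v1/completions"


def build_payload():
    # Mirrors the parameters visible in the log lines above:
    # greedy decoding (temperature=0.0, top_p=1.0), n=1, max_tokens=5.
    return {
        "model": "CR-70B",  # model name shown in the Funcpod table
        "prompt": "write a quick sort algorithm.",
        "temperature": 0.0,
        "top_p": 1.0,
        "n": 1,
        "max_tokens": 5,
    }


def send(payload):
    # The actual HTTP POST, kept separate so the payload can be
    # inspected without a live server.
    req = request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(json.dumps(build_payload(), indent=2))
```

Each successful call corresponds to one "POST /v1/completions HTTP/1.1" 200 OK line in the log; parameters left out of the payload fall back to the server defaults shown in the logged SamplingParams.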